00:00:00.000 Started by upstream project "autotest-per-patch" build number 132840 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.090 The recommended git tool is: git 00:00:00.090 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.139 Fetching changes from the remote Git repository 00:00:00.141 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.209 Using shallow fetch with depth 1 00:00:00.209 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.209 > git --version # timeout=10 00:00:00.254 > git --version # 'git version 2.39.2' 00:00:00.254 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.278 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.278 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.567 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.580 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.592 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.592 > git config core.sparsecheckout # timeout=10 00:00:06.601 > git read-tree -mu HEAD # timeout=10 00:00:06.615 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.639 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.639 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.729 [Pipeline] Start of Pipeline 00:00:06.742 [Pipeline] library 00:00:06.744 Loading library shm_lib@master 00:00:06.744 Library shm_lib@master is cached. Copying from home. 00:00:06.761 [Pipeline] node 00:00:06.770 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.772 [Pipeline] { 00:00:06.782 [Pipeline] catchError 00:00:06.783 [Pipeline] { 00:00:06.793 [Pipeline] wrap 00:00:06.800 [Pipeline] { 00:00:06.807 [Pipeline] stage 00:00:06.809 [Pipeline] { (Prologue) 00:00:07.134 [Pipeline] sh 00:00:07.418 + logger -p user.info -t JENKINS-CI 00:00:07.457 [Pipeline] echo 00:00:07.460 Node: GP11 00:00:07.481 [Pipeline] sh 00:00:07.791 [Pipeline] setCustomBuildProperty 00:00:07.802 [Pipeline] echo 00:00:07.804 Cleanup processes 00:00:07.810 [Pipeline] sh 00:00:08.096 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.096 4056423 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.111 [Pipeline] sh 00:00:08.394 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.394 ++ awk '{print $1}' 00:00:08.394 ++ grep -v 'sudo pgrep' 00:00:08.394 + sudo kill -9 00:00:08.394 + true 00:00:08.408 [Pipeline] cleanWs 00:00:08.417 [WS-CLEANUP] Deleting project workspace... 00:00:08.418 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.423 [WS-CLEANUP] done 00:00:08.427 [Pipeline] setCustomBuildProperty 00:00:08.440 [Pipeline] sh 00:00:08.724 + sudo git config --global --replace-all safe.directory '*' 00:00:08.828 [Pipeline] httpRequest 00:00:11.883 [Pipeline] echo 00:00:11.885 Sorcerer 10.211.164.101 is dead 00:00:11.893 [Pipeline] httpRequest 00:00:14.933 [Pipeline] echo 00:00:14.935 Sorcerer 10.211.164.101 is dead 00:00:14.944 [Pipeline] httpRequest 00:00:15.003 [Pipeline] echo 00:00:15.005 Sorcerer 10.211.164.96 is dead 00:00:15.014 [Pipeline] httpRequest 00:00:15.318 [Pipeline] echo 00:00:15.320 Sorcerer 10.211.164.20 is alive 00:00:15.331 [Pipeline] retry 00:00:15.332 [Pipeline] { 00:00:15.347 [Pipeline] httpRequest 00:00:15.352 HttpMethod: GET 00:00:15.352 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.353 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.356 Response Code: HTTP/1.1 200 OK 00:00:15.357 Success: Status code 200 is in the accepted range: 200,404 00:00:15.357 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.845 [Pipeline] } 00:00:15.862 [Pipeline] // retry 00:00:15.869 [Pipeline] sh 00:00:16.155 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.172 [Pipeline] httpRequest 00:00:16.673 [Pipeline] echo 00:00:16.675 Sorcerer 10.211.164.20 is alive 00:00:16.684 [Pipeline] retry 00:00:16.686 [Pipeline] { 00:00:16.700 [Pipeline] httpRequest 00:00:16.705 HttpMethod: GET 00:00:16.705 URL: http://10.211.164.20/packages/spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz 00:00:16.706 Sending request to url: http://10.211.164.20/packages/spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz 00:00:16.708 Response Code: HTTP/1.1 404 Not Found 00:00:16.709 Success: Status code 404 is in the accepted range: 200,404 00:00:16.709 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz 00:00:16.712 [Pipeline] } 00:00:16.729 [Pipeline] // retry 00:00:16.737 [Pipeline] sh 00:00:17.024 + rm -f spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz 00:00:17.038 [Pipeline] retry 00:00:17.040 [Pipeline] { 00:00:17.060 [Pipeline] checkout 00:00:17.068 The recommended git tool is: NONE 00:00:18.839 using credential 00000000-0000-0000-0000-000000000002 00:00:18.841 Wiping out workspace first. 00:00:18.850 Cloning the remote Git repository 00:00:18.852 Honoring refspec on initial clone 00:00:18.867 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:00:18.881 > git init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk # timeout=10 00:00:18.895 Using reference repository: /var/ci_repos/spdk_multi 00:00:18.896 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:00:18.896 > git --version # timeout=10 00:00:18.899 > git --version # 'git version 2.45.2' 00:00:18.899 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:18.905 Setting http proxy: proxy-dmz.intel.com:911 00:00:18.905 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/24/25524/6 +refs/heads/master:refs/remotes/origin/master # timeout=10 00:01:42.607 Avoid second fetch 00:01:42.637 Checking out Revision 2104eacf0c136776cfdaa3ea9c187a7522b3ede0 (FETCH_HEAD) 00:01:43.125 Commit message: "test/check_so_deps: use VERSION to look for prior tags" 00:01:42.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:01:42.115 > git config --add remote.origin.fetch refs/changes/24/25524/6 # timeout=10 00:01:42.119 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:01:42.609 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:01:42.628 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:01:42.644 > git config core.sparsecheckout # 
timeout=10 00:01:42.648 > git checkout -f 2104eacf0c136776cfdaa3ea9c187a7522b3ede0 # timeout=10 00:01:43.127 > git rev-list --no-walk 6263899172182e027030cd18a9502d00497c00eb # timeout=10 00:01:43.163 > git remote # timeout=10 00:01:43.174 > git submodule init # timeout=10 00:01:43.239 > git submodule sync # timeout=10 00:01:43.288 > git config --get remote.origin.url # timeout=10 00:01:43.298 > git submodule init # timeout=10 00:01:43.345 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:01:43.350 > git config --get submodule.dpdk.url # timeout=10 00:01:43.354 > git remote # timeout=10 00:01:43.358 > git config --get remote.origin.url # timeout=10 00:01:43.365 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:01:43.370 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:01:43.374 > git remote # timeout=10 00:01:43.378 > git config --get remote.origin.url # timeout=10 00:01:43.382 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:01:43.386 > git config --get submodule.isa-l.url # timeout=10 00:01:43.389 > git remote # timeout=10 00:01:43.393 > git config --get remote.origin.url # timeout=10 00:01:43.396 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:01:43.400 > git config --get submodule.ocf.url # timeout=10 00:01:43.404 > git remote # timeout=10 00:01:43.407 > git config --get remote.origin.url # timeout=10 00:01:43.411 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:01:43.414 > git config --get submodule.libvfio-user.url # timeout=10 00:01:43.417 > git remote # timeout=10 00:01:43.421 > git config --get remote.origin.url # timeout=10 00:01:43.424 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:01:43.428 > git config --get submodule.xnvme.url # timeout=10 00:01:43.432 > git remote # timeout=10 00:01:43.435 > git config --get remote.origin.url # timeout=10 00:01:43.439 > git config -f .gitmodules --get 
submodule.xnvme.path # timeout=10 00:01:43.442 > git config --get submodule.isa-l-crypto.url # timeout=10 00:01:43.446 > git remote # timeout=10 00:01:43.449 > git config --get remote.origin.url # timeout=10 00:01:43.453 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:01:43.475 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:01:43.476 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:01:43.476 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:01:43.476 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:01:43.476 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:01:43.476 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:01:43.476 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:01:43.496 Setting http proxy: proxy-dmz.intel.com:911 00:01:43.496 Setting http proxy: proxy-dmz.intel.com:911 00:01:43.496 Setting http proxy: proxy-dmz.intel.com:911 00:01:43.496 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:01:43.496 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:01:43.496 Setting http proxy: proxy-dmz.intel.com:911 00:01:43.496 Setting http proxy: proxy-dmz.intel.com:911 00:01:43.496 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:01:43.496 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:01:43.497 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:01:43.497 Setting http proxy: proxy-dmz.intel.com:911 00:01:43.497 Setting http proxy: proxy-dmz.intel.com:911 00:01:43.497 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:01:43.497 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi 
isa-l # timeout=10 00:01:53.518 [Pipeline] dir 00:01:53.519 Running in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:53.521 [Pipeline] { 00:01:53.536 [Pipeline] sh 00:01:53.824 ++ nproc 00:01:53.824 + threads=48 00:01:53.824 + git repack -a -d --threads=48 00:02:00.403 + git submodule foreach git repack -a -d --threads=48 00:02:00.403 Entering 'dpdk' 00:02:10.393 Entering 'intel-ipsec-mb' 00:02:10.652 Entering 'isa-l' 00:02:10.911 Entering 'isa-l-crypto' 00:02:11.169 Entering 'libvfio-user' 00:02:11.427 Entering 'ocf' 00:02:11.995 Entering 'xnvme' 00:02:12.563 + find .git -type f -name alternates -print -delete 00:02:12.564 .git/objects/info/alternates 00:02:12.564 .git/modules/xnvme/objects/info/alternates 00:02:12.564 .git/modules/isa-l/objects/info/alternates 00:02:12.564 .git/modules/ocf/objects/info/alternates 00:02:12.564 .git/modules/dpdk/objects/info/alternates 00:02:12.564 .git/modules/libvfio-user/objects/info/alternates 00:02:12.564 .git/modules/intel-ipsec-mb/objects/info/alternates 00:02:12.564 .git/modules/isa-l-crypto/objects/info/alternates 00:02:12.578 [Pipeline] } 00:02:12.596 [Pipeline] // dir 00:02:12.602 [Pipeline] } 00:02:12.618 [Pipeline] // retry 00:02:12.627 [Pipeline] sh 00:02:12.909 + hash pigz 00:02:12.909 + tar -cf spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz -I pigz spdk 00:02:13.490 [Pipeline] retry 00:02:13.492 [Pipeline] { 00:02:13.506 [Pipeline] httpRequest 00:02:13.513 HttpMethod: PUT 00:02:13.514 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz 00:02:13.526 Sending request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz 00:02:39.475 Response Code: HTTP/1.1 200 OK 00:02:39.484 Success: Status code 200 is in the accepted range: 200 00:02:39.487 [Pipeline] } 00:02:39.504 [Pipeline] // retry 00:02:39.512 [Pipeline] echo 00:02:39.513 00:02:39.513 Locking 00:02:39.513 
Waited 23s for lock 00:02:39.513 File already exists: /storage/packages/spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz 00:02:39.513 00:02:39.517 [Pipeline] sh 00:02:39.806 + git -C spdk log --oneline -n5 00:02:39.806 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:02:39.806 66289a6db build: use VERSION file for storing version 00:02:39.806 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:02:39.806 cec5ba284 nvme/rdma: Register UMR per IO request 00:02:39.806 7219bd1a7 thread: use extended version of fd group add 00:02:39.815 [Pipeline] } 00:02:39.830 [Pipeline] // stage 00:02:39.840 [Pipeline] stage 00:02:39.842 [Pipeline] { (Prepare) 00:02:39.859 [Pipeline] writeFile 00:02:39.876 [Pipeline] sh 00:02:40.161 + logger -p user.info -t JENKINS-CI 00:02:40.174 [Pipeline] sh 00:02:40.460 + logger -p user.info -t JENKINS-CI 00:02:40.473 [Pipeline] sh 00:02:40.755 + cat autorun-spdk.conf 00:02:40.755 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.755 SPDK_TEST_NVMF=1 00:02:40.755 SPDK_TEST_NVME_CLI=1 00:02:40.755 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:40.755 SPDK_TEST_NVMF_NICS=e810 00:02:40.755 SPDK_TEST_VFIOUSER=1 00:02:40.755 SPDK_RUN_UBSAN=1 00:02:40.755 NET_TYPE=phy 00:02:40.764 RUN_NIGHTLY=0 00:02:40.769 [Pipeline] readFile 00:02:40.796 [Pipeline] withEnv 00:02:40.799 [Pipeline] { 00:02:40.811 [Pipeline] sh 00:02:41.098 + set -ex 00:02:41.098 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:41.098 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:41.098 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:41.098 ++ SPDK_TEST_NVMF=1 00:02:41.098 ++ SPDK_TEST_NVME_CLI=1 00:02:41.098 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:41.098 ++ SPDK_TEST_NVMF_NICS=e810 00:02:41.098 ++ SPDK_TEST_VFIOUSER=1 00:02:41.099 ++ SPDK_RUN_UBSAN=1 00:02:41.099 ++ NET_TYPE=phy 00:02:41.099 ++ RUN_NIGHTLY=0 00:02:41.099 + case $SPDK_TEST_NVMF_NICS in 00:02:41.099 + DRIVERS=ice 00:02:41.099 + [[ tcp == \r\d\m\a ]] 00:02:41.099 + [[ -n 
ice ]] 00:02:41.099 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:41.099 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:41.099 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:41.099 rmmod: ERROR: Module irdma is not currently loaded 00:02:41.099 rmmod: ERROR: Module i40iw is not currently loaded 00:02:41.099 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:41.099 + true 00:02:41.099 + for D in $DRIVERS 00:02:41.099 + sudo modprobe ice 00:02:41.099 + exit 0 00:02:41.109 [Pipeline] } 00:02:41.124 [Pipeline] // withEnv 00:02:41.130 [Pipeline] } 00:02:41.145 [Pipeline] // stage 00:02:41.155 [Pipeline] catchError 00:02:41.157 [Pipeline] { 00:02:41.171 [Pipeline] timeout 00:02:41.171 Timeout set to expire in 1 hr 0 min 00:02:41.173 [Pipeline] { 00:02:41.187 [Pipeline] stage 00:02:41.190 [Pipeline] { (Tests) 00:02:41.204 [Pipeline] sh 00:02:41.491 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:41.491 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:41.491 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:41.491 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:41.491 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.491 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:41.491 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:41.491 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:41.491 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:41.491 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:41.491 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:41.491 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:41.491 + source /etc/os-release 00:02:41.491 ++ NAME='Fedora Linux' 00:02:41.491 ++ VERSION='39 (Cloud Edition)' 00:02:41.491 ++ ID=fedora 00:02:41.491 ++ VERSION_ID=39 00:02:41.491 ++ VERSION_CODENAME= 00:02:41.491 ++ PLATFORM_ID=platform:f39 00:02:41.491 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:41.491 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:41.491 ++ LOGO=fedora-logo-icon 00:02:41.491 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:41.491 ++ HOME_URL=https://fedoraproject.org/ 00:02:41.491 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:41.491 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:41.491 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:41.491 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:41.491 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:41.491 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:41.491 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:41.491 ++ SUPPORT_END=2024-11-12 00:02:41.491 ++ VARIANT='Cloud Edition' 00:02:41.491 ++ VARIANT_ID=cloud 00:02:41.491 + uname -a 00:02:41.491 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:41.491 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:42.430 Hugepages 00:02:42.430 node hugesize free / total 00:02:42.430 node0 1048576kB 0 / 0 00:02:42.430 node0 2048kB 0 / 0 00:02:42.430 node1 1048576kB 0 / 0 00:02:42.430 node1 2048kB 0 / 0 00:02:42.430 00:02:42.430 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:42.430 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:42.430 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:02:42.430 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:42.430 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:42.689 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:42.689 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:42.689 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:42.689 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:42.689 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:42.689 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:42.689 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:42.689 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:42.689 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:42.689 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:42.689 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:42.689 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:42.689 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:42.689 + rm -f /tmp/spdk-ld-path 00:02:42.689 + source autorun-spdk.conf 00:02:42.689 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:42.689 ++ SPDK_TEST_NVMF=1 00:02:42.689 ++ SPDK_TEST_NVME_CLI=1 00:02:42.689 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:42.689 ++ SPDK_TEST_NVMF_NICS=e810 00:02:42.689 ++ SPDK_TEST_VFIOUSER=1 00:02:42.689 ++ SPDK_RUN_UBSAN=1 00:02:42.689 ++ NET_TYPE=phy 00:02:42.689 ++ RUN_NIGHTLY=0 00:02:42.689 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:42.689 + [[ -n '' ]] 00:02:42.689 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.689 + for M in /var/spdk/build-*-manifest.txt 00:02:42.689 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:42.689 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:42.689 + for M in /var/spdk/build-*-manifest.txt 00:02:42.689 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:42.689 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:42.689 + for M in /var/spdk/build-*-manifest.txt 00:02:42.689 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:02:42.689 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:42.689 ++ uname 00:02:42.689 + [[ Linux == \L\i\n\u\x ]] 00:02:42.689 + sudo dmesg -T 00:02:42.689 + sudo dmesg --clear 00:02:42.689 + dmesg_pid=4059037 00:02:42.689 + [[ Fedora Linux == FreeBSD ]] 00:02:42.689 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:42.689 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:42.689 + sudo dmesg -Tw 00:02:42.689 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:42.689 + [[ -x /usr/src/fio-static/fio ]] 00:02:42.689 + export FIO_BIN=/usr/src/fio-static/fio 00:02:42.689 + FIO_BIN=/usr/src/fio-static/fio 00:02:42.689 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:42.689 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:42.689 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:42.689 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:42.689 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:42.689 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:42.689 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:42.689 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:42.689 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:42.689 22:33:50 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:42.689 22:33:50 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:42.689 22:33:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:42.689 22:33:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:42.689 22:33:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:42.689 22:33:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:02:42.689 22:33:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:42.689 22:33:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:42.689 22:33:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:42.689 22:33:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:42.689 22:33:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:42.689 22:33:50 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:42.689 22:33:50 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:42.950 22:33:50 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:42.950 22:33:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:42.950 22:33:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:42.950 22:33:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:42.950 22:33:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:42.950 22:33:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:42.950 22:33:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.950 22:33:50 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.950 22:33:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.950 22:33:50 -- paths/export.sh@5 -- $ export PATH 00:02:42.950 22:33:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.950 22:33:50 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:42.950 22:33:50 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:42.950 22:33:50 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733866430.XXXXXX 00:02:42.950 22:33:50 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733866430.jnuz2i 00:02:42.950 22:33:50 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:42.950 22:33:50 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:42.950 22:33:50 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:42.950 22:33:50 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:42.950 22:33:50 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:42.950 22:33:50 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:42.950 22:33:50 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:42.950 22:33:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.950 22:33:50 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:42.950 22:33:50 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:42.950 22:33:50 -- pm/common@17 -- $ local monitor 00:02:42.950 22:33:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.950 22:33:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.950 22:33:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.950 22:33:50 -- pm/common@21 -- $ date +%s 00:02:42.950 22:33:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.950 22:33:50 -- pm/common@21 -- $ date +%s 00:02:42.950 22:33:50 -- pm/common@25 -- $ sleep 1 00:02:42.950 22:33:50 -- pm/common@21 -- $ date +%s 00:02:42.950 22:33:50 -- pm/common@21 -- $ date +%s 00:02:42.950 22:33:50 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733866430 00:02:42.950 22:33:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733866430 00:02:42.950 22:33:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733866430 00:02:42.950 22:33:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733866430 00:02:42.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733866430_collect-vmstat.pm.log 00:02:42.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733866430_collect-cpu-load.pm.log 00:02:42.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733866430_collect-cpu-temp.pm.log 00:02:42.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733866430_collect-bmc-pm.bmc.pm.log 00:02:43.888 22:33:51 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:43.888 22:33:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:43.888 22:33:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:43.888 22:33:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.888 22:33:51 -- spdk/autobuild.sh@16 -- $ date -u 00:02:43.888 Tue Dec 10 09:33:51 PM UTC 2024 00:02:43.888 22:33:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:02:43.888 v25.01-pre-331-g2104eacf0
00:02:43.888 22:33:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:43.888 22:33:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:43.888 22:33:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:43.888 22:33:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:43.888 22:33:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:43.888 22:33:51 -- common/autotest_common.sh@10 -- $ set +x
00:02:43.888 ************************************
00:02:43.888 START TEST ubsan
00:02:43.888 ************************************
00:02:43.888 22:33:51 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:43.888 using ubsan
00:02:43.888
00:02:43.888 real 0m0.000s
00:02:43.888 user 0m0.000s
00:02:43.888 sys 0m0.000s
00:02:43.888 22:33:51 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:43.888 22:33:51 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:43.888 ************************************
00:02:43.888 END TEST ubsan
00:02:43.888 ************************************
00:02:43.888 22:33:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:43.888 22:33:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:43.888 22:33:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:43.888 22:33:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:43.888 22:33:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:43.888 22:33:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:43.888 22:33:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:43.888 22:33:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:43.888 22:33:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:43.888 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:43.888 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:44.454 Using 'verbs' RDMA provider
00:02:55.025 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:05.016 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:05.016 Creating mk/config.mk...done.
00:03:05.016 Creating mk/cc.flags.mk...done.
00:03:05.016 Type 'make' to build.
00:03:05.016 22:34:12 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:03:05.016 22:34:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:05.016 22:34:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:05.016 22:34:12 -- common/autotest_common.sh@10 -- $ set +x
00:03:05.016 ************************************
00:03:05.016 START TEST make
00:03:05.016 ************************************
00:03:05.016 22:34:12 make -- common/autotest_common.sh@1129 -- $ make -j48
00:03:06.953 The Meson build system
00:03:06.953 Version: 1.5.0
00:03:06.953 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:06.953 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:06.953 Build type: native build
00:03:06.953 Project name: libvfio-user
00:03:06.953 Project version: 0.0.1
00:03:06.953 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:06.953 C linker for the host machine: cc ld.bfd 2.40-14
00:03:06.953 Host machine cpu family: x86_64
00:03:06.953 Host machine cpu: x86_64
00:03:06.953 Run-time dependency threads found: YES
00:03:06.953 Library dl found: YES
00:03:06.953 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:06.953 Run-time dependency json-c found: YES 0.17
00:03:06.953 Run-time dependency cmocka found: YES 1.1.7
00:03:06.953 Program pytest-3 found: NO
00:03:06.953 Program flake8 found: NO
00:03:06.953 Program misspell-fixer found: NO
00:03:06.953 Program restructuredtext-lint found: NO
00:03:06.953 Program valgrind found: YES (/usr/bin/valgrind)
00:03:06.953 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:06.953 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:06.953 Compiler for C supports arguments -Wwrite-strings: YES
00:03:06.953 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:06.953 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:06.953 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:06.953 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:06.953 Build targets in project: 8
00:03:06.953 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:06.953 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:06.953
00:03:06.953 libvfio-user 0.0.1
00:03:06.953
00:03:06.953 User defined options
00:03:06.953 buildtype : debug
00:03:06.953 default_library: shared
00:03:06.953 libdir : /usr/local/lib
00:03:06.953
00:03:06.953 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:07.922 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:08.182 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:08.182 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:08.182 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:08.182 [4/37] Compiling C object samples/null.p/null.c.o
00:03:08.182 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:08.182 [6/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:08.182 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:08.182 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:08.182 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:08.182 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:08.182 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:08.182 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:08.182 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:08.182 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:08.182 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:08.182 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:08.182 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:08.182 [18/37] Compiling C object samples/server.p/server.c.o
00:03:08.182 [19/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:08.182 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:08.182 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:08.182 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:08.182 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:08.182 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:08.182 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:08.182 [26/37] Compiling C object samples/client.p/client.c.o
00:03:08.442 [27/37] Linking target samples/client
00:03:08.442 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:08.442 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:08.442 [30/37] Linking target test/unit_tests
00:03:08.443 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:03:08.704 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:08.704 [33/37] Linking target samples/null
00:03:08.704 [34/37] Linking target samples/shadow_ioeventfd_server
00:03:08.704 [35/37] Linking target samples/server
00:03:08.704 [36/37] Linking target samples/gpio-pci-idio-16
00:03:08.704 [37/37] Linking target samples/lspci
00:03:08.704 INFO: autodetecting backend as ninja
00:03:08.704 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:08.965 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:09.906 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:09.906 ninja: no work to do.
00:03:14.096 The Meson build system
00:03:14.096 Version: 1.5.0
00:03:14.096 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:14.096 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:14.096 Build type: native build
00:03:14.096 Program cat found: YES (/usr/bin/cat)
00:03:14.096 Project name: DPDK
00:03:14.096 Project version: 24.03.0
00:03:14.096 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:14.096 C linker for the host machine: cc ld.bfd 2.40-14
00:03:14.096 Host machine cpu family: x86_64
00:03:14.096 Host machine cpu: x86_64
00:03:14.096 Message: ## Building in Developer Mode ##
00:03:14.096 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:14.096 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:14.096 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:14.096 Program python3 found: YES (/usr/bin/python3)
00:03:14.096 Program cat found: YES (/usr/bin/cat)
00:03:14.096 Compiler for C supports arguments -march=native: YES
00:03:14.096 Checking for size of "void *" : 8
00:03:14.096 Checking for size of "void *" : 8 (cached)
00:03:14.096 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:14.096 Library m found: YES
00:03:14.096 Library numa found: YES
00:03:14.096 Has header "numaif.h" : YES
00:03:14.096 Library fdt found: NO
00:03:14.096 Library execinfo found: NO
00:03:14.096 Has header "execinfo.h" : YES
00:03:14.096 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:14.096 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:14.096 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:14.096 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:14.096 Run-time dependency openssl found: YES 3.1.1
00:03:14.096 Run-time dependency libpcap found: YES 1.10.4
00:03:14.096 Has header "pcap.h" with dependency libpcap: YES
00:03:14.096 Compiler for C supports arguments -Wcast-qual: YES
00:03:14.096 Compiler for C supports arguments -Wdeprecated: YES
00:03:14.096 Compiler for C supports arguments -Wformat: YES
00:03:14.096 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:14.096 Compiler for C supports arguments -Wformat-security: NO
00:03:14.096 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:14.096 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:14.096 Compiler for C supports arguments -Wnested-externs: YES
00:03:14.096 Compiler for C supports arguments -Wold-style-definition: YES
00:03:14.096 Compiler for C supports arguments -Wpointer-arith: YES
00:03:14.096 Compiler for C supports arguments -Wsign-compare: YES
00:03:14.096 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:14.096 Compiler for C supports arguments -Wundef: YES
00:03:14.096 Compiler for C supports arguments -Wwrite-strings: YES
00:03:14.096 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:14.096 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:14.096 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:14.096 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:14.096 Program objdump found: YES (/usr/bin/objdump)
00:03:14.096 Compiler for C supports arguments -mavx512f: YES
00:03:14.096 Checking if "AVX512 checking" compiles: YES
00:03:14.096 Fetching value of define "__SSE4_2__" : 1
00:03:14.096 Fetching value of define "__AES__" : 1
00:03:14.096 Fetching value of define "__AVX__" : 1
00:03:14.096 Fetching value of define "__AVX2__" : (undefined)
00:03:14.096 Fetching value of define "__AVX512BW__" : (undefined)
00:03:14.096 Fetching value of define "__AVX512CD__" : (undefined)
00:03:14.096 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:14.096 Fetching value of define "__AVX512F__" : (undefined)
00:03:14.096 Fetching value of define "__AVX512VL__" : (undefined)
00:03:14.096 Fetching value of define "__PCLMUL__" : 1
00:03:14.096 Fetching value of define "__RDRND__" : 1
00:03:14.096 Fetching value of define "__RDSEED__" : (undefined)
00:03:14.096 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:14.096 Fetching value of define "__znver1__" : (undefined)
00:03:14.096 Fetching value of define "__znver2__" : (undefined)
00:03:14.096 Fetching value of define "__znver3__" : (undefined)
00:03:14.096 Fetching value of define "__znver4__" : (undefined)
00:03:14.096 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:14.096 Message: lib/log: Defining dependency "log"
00:03:14.096 Message: lib/kvargs: Defining dependency "kvargs"
00:03:14.096 Message: lib/telemetry: Defining dependency "telemetry"
00:03:14.096 Checking for function "getentropy" : NO
00:03:14.096 Message: lib/eal: Defining dependency "eal"
00:03:14.096 Message: lib/ring: Defining dependency "ring"
00:03:14.096 Message: lib/rcu: Defining dependency "rcu"
00:03:14.096 Message: lib/mempool: Defining dependency "mempool"
00:03:14.096 Message: lib/mbuf: Defining dependency "mbuf"
00:03:14.096 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:14.097 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:14.097 Compiler for C supports arguments -mpclmul: YES
00:03:14.097 Compiler for C supports arguments -maes: YES
00:03:14.097 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:14.097 Compiler for C supports arguments -mavx512bw: YES
00:03:14.097 Compiler for C supports arguments -mavx512dq: YES
00:03:14.097 Compiler for C supports arguments -mavx512vl: YES
00:03:14.097 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:14.097 Compiler for C supports arguments -mavx2: YES
00:03:14.097 Compiler for C supports arguments -mavx: YES
00:03:14.097 Message: lib/net: Defining dependency "net"
00:03:14.097 Message: lib/meter: Defining dependency "meter"
00:03:14.097 Message: lib/ethdev: Defining dependency "ethdev"
00:03:14.097 Message: lib/pci: Defining dependency "pci"
00:03:14.097 Message: lib/cmdline: Defining dependency "cmdline"
00:03:14.097 Message: lib/hash: Defining dependency "hash"
00:03:14.097 Message: lib/timer: Defining dependency "timer"
00:03:14.097 Message: lib/compressdev: Defining dependency "compressdev"
00:03:14.097 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:14.097 Message: lib/dmadev: Defining dependency "dmadev"
00:03:14.097 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:14.097 Message: lib/power: Defining dependency "power"
00:03:14.097 Message: lib/reorder: Defining dependency "reorder"
00:03:14.097 Message: lib/security: Defining dependency "security"
00:03:14.097 Has header "linux/userfaultfd.h" : YES
00:03:14.097 Has header "linux/vduse.h" : YES
00:03:14.097 Message: lib/vhost: Defining dependency "vhost"
00:03:14.097 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:14.097 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:14.097 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:14.097 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:14.097 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:14.097 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:14.097 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:14.097 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:14.097 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:14.097 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:14.097 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:14.097 Configuring doxy-api-html.conf using configuration
00:03:14.097 Configuring doxy-api-man.conf using configuration
00:03:14.097
Program mandb found: YES (/usr/bin/mandb)
00:03:14.097 Program sphinx-build found: NO
00:03:14.097 Configuring rte_build_config.h using configuration
00:03:14.097 Message:
00:03:14.097 =================
00:03:14.097 Applications Enabled
00:03:14.097 =================
00:03:14.097
00:03:14.097 apps:
00:03:14.097
00:03:14.097
00:03:14.097 Message:
00:03:14.097 =================
00:03:14.097 Libraries Enabled
00:03:14.097 =================
00:03:14.097
00:03:14.097 libs:
00:03:14.097 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:14.097 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:14.097 cryptodev, dmadev, power, reorder, security, vhost,
00:03:14.097
00:03:14.097 Message:
00:03:14.097 ===============
00:03:14.097 Drivers Enabled
00:03:14.097 ===============
00:03:14.097
00:03:14.097 common:
00:03:14.097
00:03:14.097 bus:
00:03:14.097 pci, vdev,
00:03:14.097 mempool:
00:03:14.097 ring,
00:03:14.097 dma:
00:03:14.097
00:03:14.097 net:
00:03:14.097
00:03:14.097 crypto:
00:03:14.097
00:03:14.097 compress:
00:03:14.097
00:03:14.097 vdpa:
00:03:14.097
00:03:14.097
00:03:14.097 Message:
00:03:14.097 =================
00:03:14.097 Content Skipped
00:03:14.097 =================
00:03:14.097
00:03:14.097 apps:
00:03:14.097 dumpcap: explicitly disabled via build config
00:03:14.097 graph: explicitly disabled via build config
00:03:14.097 pdump: explicitly disabled via build config
00:03:14.097 proc-info: explicitly disabled via build config
00:03:14.097 test-acl: explicitly disabled via build config
00:03:14.097 test-bbdev: explicitly disabled via build config
00:03:14.097 test-cmdline: explicitly disabled via build config
00:03:14.097 test-compress-perf: explicitly disabled via build config
00:03:14.097 test-crypto-perf: explicitly disabled via build config
00:03:14.097 test-dma-perf: explicitly disabled via build config
00:03:14.097 test-eventdev: explicitly disabled via build config
00:03:14.097 test-fib: explicitly disabled via build config
00:03:14.097 test-flow-perf: explicitly disabled via build config
00:03:14.097 test-gpudev: explicitly disabled via build config
00:03:14.097 test-mldev: explicitly disabled via build config
00:03:14.097 test-pipeline: explicitly disabled via build config
00:03:14.097 test-pmd: explicitly disabled via build config
00:03:14.097 test-regex: explicitly disabled via build config
00:03:14.097 test-sad: explicitly disabled via build config
00:03:14.097 test-security-perf: explicitly disabled via build config
00:03:14.097
00:03:14.097 libs:
00:03:14.097 argparse: explicitly disabled via build config
00:03:14.097 metrics: explicitly disabled via build config
00:03:14.097 acl: explicitly disabled via build config
00:03:14.097 bbdev: explicitly disabled via build config
00:03:14.097 bitratestats: explicitly disabled via build config
00:03:14.097 bpf: explicitly disabled via build config
00:03:14.097 cfgfile: explicitly disabled via build config
00:03:14.097 distributor: explicitly disabled via build config
00:03:14.097 efd: explicitly disabled via build config
00:03:14.097 eventdev: explicitly disabled via build config
00:03:14.097 dispatcher: explicitly disabled via build config
00:03:14.097 gpudev: explicitly disabled via build config
00:03:14.097 gro: explicitly disabled via build config
00:03:14.097 gso: explicitly disabled via build config
00:03:14.097 ip_frag: explicitly disabled via build config
00:03:14.097 jobstats: explicitly disabled via build config
00:03:14.097 latencystats: explicitly disabled via build config
00:03:14.097 lpm: explicitly disabled via build config
00:03:14.097 member: explicitly disabled via build config
00:03:14.097 pcapng: explicitly disabled via build config
00:03:14.097 rawdev: explicitly disabled via build config
00:03:14.097 regexdev: explicitly disabled via build config
00:03:14.097 mldev: explicitly disabled via build config
00:03:14.097 rib: explicitly disabled via build config
00:03:14.097 sched: explicitly disabled via build config
00:03:14.097 stack: explicitly disabled via build config
00:03:14.097 ipsec: explicitly disabled via build config
00:03:14.097 pdcp: explicitly disabled via build config
00:03:14.097 fib: explicitly disabled via build config
00:03:14.097 port: explicitly disabled via build config
00:03:14.097 pdump: explicitly disabled via build config
00:03:14.097 table: explicitly disabled via build config
00:03:14.097 pipeline: explicitly disabled via build config
00:03:14.097 graph: explicitly disabled via build config
00:03:14.097 node: explicitly disabled via build config
00:03:14.097
00:03:14.097 drivers:
00:03:14.097 common/cpt: not in enabled drivers build config
00:03:14.097 common/dpaax: not in enabled drivers build config
00:03:14.097 common/iavf: not in enabled drivers build config
00:03:14.097 common/idpf: not in enabled drivers build config
00:03:14.097 common/ionic: not in enabled drivers build config
00:03:14.097 common/mvep: not in enabled drivers build config
00:03:14.097 common/octeontx: not in enabled drivers build config
00:03:14.097 bus/auxiliary: not in enabled drivers build config
00:03:14.097 bus/cdx: not in enabled drivers build config
00:03:14.097 bus/dpaa: not in enabled drivers build config
00:03:14.097 bus/fslmc: not in enabled drivers build config
00:03:14.097 bus/ifpga: not in enabled drivers build config
00:03:14.097 bus/platform: not in enabled drivers build config
00:03:14.097 bus/uacce: not in enabled drivers build config
00:03:14.097 bus/vmbus: not in enabled drivers build config
00:03:14.097 common/cnxk: not in enabled drivers build config
00:03:14.097 common/mlx5: not in enabled drivers build config
00:03:14.097 common/nfp: not in enabled drivers build config
00:03:14.097 common/nitrox: not in enabled drivers build config
00:03:14.097 common/qat: not in enabled drivers build config
00:03:14.097 common/sfc_efx: not in enabled drivers build config
00:03:14.097 mempool/bucket: not in enabled drivers build config
00:03:14.097 mempool/cnxk: not in enabled drivers build config
00:03:14.097 mempool/dpaa: not in enabled drivers build config
00:03:14.097 mempool/dpaa2: not in enabled drivers build config
00:03:14.097 mempool/octeontx: not in enabled drivers build config
00:03:14.097 mempool/stack: not in enabled drivers build config
00:03:14.097 dma/cnxk: not in enabled drivers build config
00:03:14.097 dma/dpaa: not in enabled drivers build config
00:03:14.097 dma/dpaa2: not in enabled drivers build config
00:03:14.097 dma/hisilicon: not in enabled drivers build config
00:03:14.097 dma/idxd: not in enabled drivers build config
00:03:14.097 dma/ioat: not in enabled drivers build config
00:03:14.097 dma/skeleton: not in enabled drivers build config
00:03:14.097 net/af_packet: not in enabled drivers build config
00:03:14.097 net/af_xdp: not in enabled drivers build config
00:03:14.097 net/ark: not in enabled drivers build config
00:03:14.097 net/atlantic: not in enabled drivers build config
00:03:14.097 net/avp: not in enabled drivers build config
00:03:14.097 net/axgbe: not in enabled drivers build config
00:03:14.097 net/bnx2x: not in enabled drivers build config
00:03:14.097 net/bnxt: not in enabled drivers build config
00:03:14.097 net/bonding: not in enabled drivers build config
00:03:14.097 net/cnxk: not in enabled drivers build config
00:03:14.097 net/cpfl: not in enabled drivers build config
00:03:14.097 net/cxgbe: not in enabled drivers build config
00:03:14.097 net/dpaa: not in enabled drivers build config
00:03:14.097 net/dpaa2: not in enabled drivers build config
00:03:14.097 net/e1000: not in enabled drivers build config
00:03:14.097 net/ena: not in enabled drivers build config
00:03:14.097 net/enetc: not in enabled drivers build config
00:03:14.097 net/enetfec: not in enabled drivers build config
00:03:14.097 net/enic: not in enabled drivers build config
00:03:14.097 net/failsafe: not in enabled drivers build config
00:03:14.097 net/fm10k: not in enabled drivers build config
00:03:14.097
net/gve: not in enabled drivers build config
00:03:14.097 net/hinic: not in enabled drivers build config
00:03:14.097 net/hns3: not in enabled drivers build config
00:03:14.097 net/i40e: not in enabled drivers build config
00:03:14.097 net/iavf: not in enabled drivers build config
00:03:14.097 net/ice: not in enabled drivers build config
00:03:14.097 net/idpf: not in enabled drivers build config
00:03:14.097 net/igc: not in enabled drivers build config
00:03:14.098 net/ionic: not in enabled drivers build config
00:03:14.098 net/ipn3ke: not in enabled drivers build config
00:03:14.098 net/ixgbe: not in enabled drivers build config
00:03:14.098 net/mana: not in enabled drivers build config
00:03:14.098 net/memif: not in enabled drivers build config
00:03:14.098 net/mlx4: not in enabled drivers build config
00:03:14.098 net/mlx5: not in enabled drivers build config
00:03:14.098 net/mvneta: not in enabled drivers build config
00:03:14.098 net/mvpp2: not in enabled drivers build config
00:03:14.098 net/netvsc: not in enabled drivers build config
00:03:14.098 net/nfb: not in enabled drivers build config
00:03:14.098 net/nfp: not in enabled drivers build config
00:03:14.098 net/ngbe: not in enabled drivers build config
00:03:14.098 net/null: not in enabled drivers build config
00:03:14.098 net/octeontx: not in enabled drivers build config
00:03:14.098 net/octeon_ep: not in enabled drivers build config
00:03:14.098 net/pcap: not in enabled drivers build config
00:03:14.098 net/pfe: not in enabled drivers build config
00:03:14.098 net/qede: not in enabled drivers build config
00:03:14.098 net/ring: not in enabled drivers build config
00:03:14.098 net/sfc: not in enabled drivers build config
00:03:14.098 net/softnic: not in enabled drivers build config
00:03:14.098 net/tap: not in enabled drivers build config
00:03:14.098 net/thunderx: not in enabled drivers build config
00:03:14.098 net/txgbe: not in enabled drivers build config
00:03:14.098 net/vdev_netvsc: not in enabled drivers build config
00:03:14.098 net/vhost: not in enabled drivers build config
00:03:14.098 net/virtio: not in enabled drivers build config
00:03:14.098 net/vmxnet3: not in enabled drivers build config
00:03:14.098 raw/*: missing internal dependency, "rawdev"
00:03:14.098 crypto/armv8: not in enabled drivers build config
00:03:14.098 crypto/bcmfs: not in enabled drivers build config
00:03:14.098 crypto/caam_jr: not in enabled drivers build config
00:03:14.098 crypto/ccp: not in enabled drivers build config
00:03:14.098 crypto/cnxk: not in enabled drivers build config
00:03:14.098 crypto/dpaa_sec: not in enabled drivers build config
00:03:14.098 crypto/dpaa2_sec: not in enabled drivers build config
00:03:14.098 crypto/ipsec_mb: not in enabled drivers build config
00:03:14.098 crypto/mlx5: not in enabled drivers build config
00:03:14.098 crypto/mvsam: not in enabled drivers build config
00:03:14.098 crypto/nitrox: not in enabled drivers build config
00:03:14.098 crypto/null: not in enabled drivers build config
00:03:14.098 crypto/octeontx: not in enabled drivers build config
00:03:14.098 crypto/openssl: not in enabled drivers build config
00:03:14.098 crypto/scheduler: not in enabled drivers build config
00:03:14.098 crypto/uadk: not in enabled drivers build config
00:03:14.098 crypto/virtio: not in enabled drivers build config
00:03:14.098 compress/isal: not in enabled drivers build config
00:03:14.098 compress/mlx5: not in enabled drivers build config
00:03:14.098 compress/nitrox: not in enabled drivers build config
00:03:14.098 compress/octeontx: not in enabled drivers build config
00:03:14.098 compress/zlib: not in enabled drivers build config
00:03:14.098 regex/*: missing internal dependency, "regexdev"
00:03:14.098 ml/*: missing internal dependency, "mldev"
00:03:14.098 vdpa/ifc: not in enabled drivers build config
00:03:14.098 vdpa/mlx5: not in enabled drivers build config
00:03:14.098 vdpa/nfp: not in enabled drivers build config
00:03:14.098 vdpa/sfc: not in enabled drivers build config
00:03:14.098 event/*: missing internal dependency, "eventdev"
00:03:14.098 baseband/*: missing internal dependency, "bbdev"
00:03:14.098 gpu/*: missing internal dependency, "gpudev"
00:03:14.098
00:03:14.098
00:03:14.665 Build targets in project: 85
00:03:14.665
00:03:14.665 DPDK 24.03.0
00:03:14.665
00:03:14.665 User defined options
00:03:14.665 buildtype : debug
00:03:14.665 default_library : shared
00:03:14.665 libdir : lib
00:03:14.665 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:14.665 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:14.665 c_link_args :
00:03:14.665 cpu_instruction_set: native
00:03:14.665 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:14.665 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:14.665 enable_docs : false
00:03:14.665 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:14.665 enable_kmods : false
00:03:14.665 max_lcores : 128
00:03:14.665 tests : false
00:03:14.665
00:03:14.665 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:14.929 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:03:15.192 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:15.192 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:15.192 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:15.192 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:15.192 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:15.192 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:15.192 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:15.192 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:15.192 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:15.192 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:15.192 [11/268] Linking static target lib/librte_kvargs.a
00:03:15.192 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:15.192 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:15.192 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:15.192 [15/268] Linking static target lib/librte_log.a
00:03:15.192 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:15.760 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.023 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:16.023 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:16.023 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:16.023 [21/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:16.023 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:16.023 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:16.023 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:16.023 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:16.023 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:16.023 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:16.023 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:16.023 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:16.023 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:16.023 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:16.023 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:16.023 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:16.023 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:16.023 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:16.023 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:16.023 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:16.023 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:16.023 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:16.023 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:16.023 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:16.023 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:16.023 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:16.023 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:16.023 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:16.023 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:16.023 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:16.023
[48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:16.023 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:16.023 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:16.023 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:16.023 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:16.023 [53/268] Linking static target lib/librte_telemetry.a 00:03:16.023 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:16.023 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:16.023 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:16.284 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:16.284 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:16.284 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:16.284 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:16.284 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:16.284 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:16.284 [63/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.284 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:16.284 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:16.548 [66/268] Linking target lib/librte_log.so.24.1 00:03:16.548 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:16.548 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:16.548 [69/268] Linking static target lib/librte_pci.a 00:03:16.548 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:16.810 [71/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:16.810 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:16.810 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:16.810 [74/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:16.810 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:16.810 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:16.810 [77/268] Linking target lib/librte_kvargs.so.24.1 00:03:16.810 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:16.810 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:17.072 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:17.073 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:17.073 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:17.073 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:17.073 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:17.073 [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:17.073 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:17.073 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:17.073 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:17.073 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:17.073 [90/268] Linking static target lib/librte_ring.a 00:03:17.073 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:17.073 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:17.073 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:17.073 [94/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:17.073 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:17.073 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:17.073 [97/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.073 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:17.073 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:17.073 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:17.073 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:17.073 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:17.073 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:17.073 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:17.073 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:17.073 [106/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:17.345 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:17.345 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:17.345 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:17.345 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:17.345 [111/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.345 [112/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:17.345 [113/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:17.345 [114/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:17.345 [115/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:17.345 [116/268] Linking static target 
lib/librte_eal.a 00:03:17.345 [117/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:17.345 [118/268] Linking static target lib/librte_meter.a 00:03:17.345 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:17.345 [120/268] Linking static target lib/librte_mempool.a 00:03:17.345 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:17.345 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:17.345 [123/268] Linking target lib/librte_telemetry.so.24.1 00:03:17.345 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:17.345 [125/268] Linking static target lib/librte_rcu.a 00:03:17.345 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:17.345 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:17.609 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:17.609 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:17.609 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:17.609 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:17.609 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:17.609 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:17.609 [134/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:17.609 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:17.609 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:17.609 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.609 [138/268] Linking static target lib/librte_net.a 00:03:17.609 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:17.889 [140/268] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:17.889 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:17.889 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:17.889 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:17.889 [144/268] Linking static target lib/librte_cmdline.a 00:03:17.889 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:17.889 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:17.889 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:18.181 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.181 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:18.181 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:18.181 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:18.181 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:18.181 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:18.181 [154/268] Linking static target lib/librte_timer.a 00:03:18.181 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:18.181 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:18.181 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:18.181 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:18.181 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:18.181 [160/268] Linking static target lib/librte_dmadev.a 00:03:18.181 [161/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.441 [162/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:18.442 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:18.442 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:18.442 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:18.442 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.442 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:18.442 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:18.442 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:18.442 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:18.442 [171/268] Linking static target lib/librte_power.a 00:03:18.442 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:18.442 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:18.442 [174/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.701 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:18.701 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:18.701 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:18.701 [178/268] Linking static target lib/librte_compressdev.a 00:03:18.701 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:18.701 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:18.701 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:18.701 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:18.701 [183/268] Linking static target lib/librte_hash.a 00:03:18.701 [184/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:18.701 [185/268] Linking static target lib/librte_mbuf.a 00:03:18.701 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:18.701 [187/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:18.701 [188/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:18.701 [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.701 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:18.701 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:18.701 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:18.959 [193/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.959 [194/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:18.959 [195/268] Linking static target lib/librte_reorder.a 00:03:18.959 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:18.959 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:18.959 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:18.959 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:18.959 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:18.959 [201/268] Linking static target drivers/librte_bus_vdev.a 00:03:18.959 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:18.959 [203/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.959 [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.959 [205/268] Linking static target drivers/librte_bus_pci.a 00:03:18.959 [206/268] Generating lib/power.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:18.959 [207/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:18.959 [208/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:19.219 [209/268] Linking static target lib/librte_security.a 00:03:19.219 [210/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.219 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.219 [212/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.219 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:19.219 [214/268] Linking static target lib/librte_ethdev.a 00:03:19.219 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.219 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:19.219 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:19.219 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.478 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.478 [220/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:19.478 [221/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.478 [222/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.478 [223/268] Linking static target drivers/librte_mempool_ring.a 00:03:19.478 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.478 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:19.478 [226/268] Linking static target lib/librte_cryptodev.a 00:03:20.858 
[227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.793 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:23.695 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.695 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.695 [231/268] Linking target lib/librte_eal.so.24.1 00:03:23.695 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:23.695 [233/268] Linking target lib/librte_ring.so.24.1 00:03:23.695 [234/268] Linking target lib/librte_pci.so.24.1 00:03:23.695 [235/268] Linking target lib/librte_meter.so.24.1 00:03:23.695 [236/268] Linking target lib/librte_timer.so.24.1 00:03:23.695 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:23.695 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:23.953 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:23.954 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:23.954 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:23.954 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:23.954 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:23.954 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:23.954 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:23.954 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:23.954 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:23.954 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:23.954 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:23.954 [250/268] Linking target lib/librte_mbuf.so.24.1 
00:03:24.212 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:24.212 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:24.212 [253/268] Linking target lib/librte_net.so.24.1 00:03:24.212 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:24.212 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:24.212 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:24.212 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:24.470 [258/268] Linking target lib/librte_security.so.24.1 00:03:24.470 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:24.470 [260/268] Linking target lib/librte_hash.so.24.1 00:03:24.470 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:24.470 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:24.470 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:24.470 [264/268] Linking target lib/librte_power.so.24.1 00:03:27.751 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:27.751 [266/268] Linking static target lib/librte_vhost.a 00:03:29.126 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.126 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:29.126 INFO: autodetecting backend as ninja 00:03:29.126 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:51.055 CC lib/ut/ut.o 00:03:51.055 CC lib/log/log.o 00:03:51.055 CC lib/log/log_flags.o 00:03:51.055 CC lib/log/log_deprecated.o 00:03:51.055 CC lib/ut_mock/mock.o 00:03:51.055 LIB libspdk_ut.a 00:03:51.055 LIB libspdk_ut_mock.a 00:03:51.055 LIB libspdk_log.a 00:03:51.055 SO libspdk_ut.so.2.0 00:03:51.055 SO libspdk_ut_mock.so.6.0 00:03:51.055 SO libspdk_log.so.7.1 
00:03:51.055 SYMLINK libspdk_ut.so 00:03:51.055 SYMLINK libspdk_ut_mock.so 00:03:51.055 SYMLINK libspdk_log.so 00:03:51.055 CXX lib/trace_parser/trace.o 00:03:51.055 CC lib/util/base64.o 00:03:51.055 CC lib/dma/dma.o 00:03:51.055 CC lib/util/bit_array.o 00:03:51.055 CC lib/ioat/ioat.o 00:03:51.055 CC lib/util/cpuset.o 00:03:51.055 CC lib/util/crc16.o 00:03:51.055 CC lib/util/crc32.o 00:03:51.055 CC lib/util/crc32c.o 00:03:51.055 CC lib/util/crc32_ieee.o 00:03:51.055 CC lib/util/crc64.o 00:03:51.055 CC lib/util/dif.o 00:03:51.055 CC lib/util/fd.o 00:03:51.055 CC lib/util/fd_group.o 00:03:51.055 CC lib/util/file.o 00:03:51.055 CC lib/util/hexlify.o 00:03:51.055 CC lib/util/iov.o 00:03:51.055 CC lib/util/math.o 00:03:51.055 CC lib/util/net.o 00:03:51.055 CC lib/util/pipe.o 00:03:51.055 CC lib/util/strerror_tls.o 00:03:51.055 CC lib/util/string.o 00:03:51.055 CC lib/util/uuid.o 00:03:51.055 CC lib/util/xor.o 00:03:51.055 CC lib/util/md5.o 00:03:51.055 CC lib/util/zipf.o 00:03:51.055 CC lib/vfio_user/host/vfio_user_pci.o 00:03:51.055 CC lib/vfio_user/host/vfio_user.o 00:03:51.055 LIB libspdk_dma.a 00:03:51.055 SO libspdk_dma.so.5.0 00:03:51.055 SYMLINK libspdk_dma.so 00:03:51.055 LIB libspdk_ioat.a 00:03:51.055 SO libspdk_ioat.so.7.0 00:03:51.055 SYMLINK libspdk_ioat.so 00:03:51.055 LIB libspdk_vfio_user.a 00:03:51.055 SO libspdk_vfio_user.so.5.0 00:03:51.055 SYMLINK libspdk_vfio_user.so 00:03:51.055 LIB libspdk_util.a 00:03:51.055 SO libspdk_util.so.10.1 00:03:51.055 SYMLINK libspdk_util.so 00:03:51.055 CC lib/conf/conf.o 00:03:51.055 CC lib/json/json_parse.o 00:03:51.055 CC lib/env_dpdk/env.o 00:03:51.055 CC lib/json/json_util.o 00:03:51.055 CC lib/vmd/vmd.o 00:03:51.055 CC lib/idxd/idxd.o 00:03:51.055 CC lib/env_dpdk/memory.o 00:03:51.055 CC lib/rdma_utils/rdma_utils.o 00:03:51.055 CC lib/vmd/led.o 00:03:51.055 CC lib/json/json_write.o 00:03:51.055 CC lib/idxd/idxd_user.o 00:03:51.055 CC lib/env_dpdk/pci.o 00:03:51.055 CC lib/idxd/idxd_kernel.o 00:03:51.055 CC 
lib/env_dpdk/init.o 00:03:51.055 CC lib/env_dpdk/threads.o 00:03:51.055 CC lib/env_dpdk/pci_ioat.o 00:03:51.055 CC lib/env_dpdk/pci_virtio.o 00:03:51.055 CC lib/env_dpdk/pci_vmd.o 00:03:51.055 CC lib/env_dpdk/pci_idxd.o 00:03:51.055 LIB libspdk_trace_parser.a 00:03:51.055 CC lib/env_dpdk/pci_event.o 00:03:51.055 CC lib/env_dpdk/sigbus_handler.o 00:03:51.055 CC lib/env_dpdk/pci_dpdk.o 00:03:51.055 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:51.055 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:51.055 SO libspdk_trace_parser.so.6.0 00:03:51.055 SYMLINK libspdk_trace_parser.so 00:03:51.055 LIB libspdk_conf.a 00:03:51.055 SO libspdk_conf.so.6.0 00:03:51.055 LIB libspdk_rdma_utils.a 00:03:51.055 SYMLINK libspdk_conf.so 00:03:51.055 SO libspdk_rdma_utils.so.1.0 00:03:51.055 LIB libspdk_json.a 00:03:51.055 SO libspdk_json.so.6.0 00:03:51.055 SYMLINK libspdk_rdma_utils.so 00:03:51.055 SYMLINK libspdk_json.so 00:03:51.055 CC lib/rdma_provider/common.o 00:03:51.055 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:51.055 CC lib/jsonrpc/jsonrpc_server.o 00:03:51.055 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:51.055 CC lib/jsonrpc/jsonrpc_client.o 00:03:51.055 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:51.055 LIB libspdk_idxd.a 00:03:51.055 SO libspdk_idxd.so.12.1 00:03:51.055 LIB libspdk_vmd.a 00:03:51.055 SYMLINK libspdk_idxd.so 00:03:51.055 SO libspdk_vmd.so.6.0 00:03:51.314 SYMLINK libspdk_vmd.so 00:03:51.314 LIB libspdk_rdma_provider.a 00:03:51.314 SO libspdk_rdma_provider.so.7.0 00:03:51.314 LIB libspdk_jsonrpc.a 00:03:51.314 SYMLINK libspdk_rdma_provider.so 00:03:51.314 SO libspdk_jsonrpc.so.6.0 00:03:51.314 SYMLINK libspdk_jsonrpc.so 00:03:51.579 CC lib/rpc/rpc.o 00:03:51.890 LIB libspdk_rpc.a 00:03:51.890 SO libspdk_rpc.so.6.0 00:03:51.890 SYMLINK libspdk_rpc.so 00:03:52.148 CC lib/notify/notify.o 00:03:52.148 CC lib/trace/trace.o 00:03:52.148 CC lib/notify/notify_rpc.o 00:03:52.148 CC lib/trace/trace_flags.o 00:03:52.148 CC lib/trace/trace_rpc.o 00:03:52.148 CC lib/keyring/keyring.o 
00:03:52.148 CC lib/keyring/keyring_rpc.o 00:03:52.148 LIB libspdk_notify.a 00:03:52.148 SO libspdk_notify.so.6.0 00:03:52.148 SYMLINK libspdk_notify.so 00:03:52.405 LIB libspdk_keyring.a 00:03:52.406 LIB libspdk_trace.a 00:03:52.406 SO libspdk_keyring.so.2.0 00:03:52.406 SO libspdk_trace.so.11.0 00:03:52.406 SYMLINK libspdk_keyring.so 00:03:52.406 SYMLINK libspdk_trace.so 00:03:52.406 LIB libspdk_env_dpdk.a 00:03:52.663 SO libspdk_env_dpdk.so.15.1 00:03:52.663 CC lib/sock/sock.o 00:03:52.663 CC lib/thread/thread.o 00:03:52.663 CC lib/sock/sock_rpc.o 00:03:52.663 CC lib/thread/iobuf.o 00:03:52.663 SYMLINK libspdk_env_dpdk.so 00:03:52.922 LIB libspdk_sock.a 00:03:52.922 SO libspdk_sock.so.10.0 00:03:53.179 SYMLINK libspdk_sock.so 00:03:53.179 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:53.179 CC lib/nvme/nvme_ctrlr.o 00:03:53.179 CC lib/nvme/nvme_fabric.o 00:03:53.179 CC lib/nvme/nvme_ns_cmd.o 00:03:53.179 CC lib/nvme/nvme_ns.o 00:03:53.179 CC lib/nvme/nvme_pcie_common.o 00:03:53.179 CC lib/nvme/nvme_pcie.o 00:03:53.179 CC lib/nvme/nvme_qpair.o 00:03:53.179 CC lib/nvme/nvme.o 00:03:53.179 CC lib/nvme/nvme_quirks.o 00:03:53.179 CC lib/nvme/nvme_transport.o 00:03:53.179 CC lib/nvme/nvme_discovery.o 00:03:53.179 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:53.179 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:53.179 CC lib/nvme/nvme_tcp.o 00:03:53.179 CC lib/nvme/nvme_opal.o 00:03:53.179 CC lib/nvme/nvme_io_msg.o 00:03:53.179 CC lib/nvme/nvme_poll_group.o 00:03:53.179 CC lib/nvme/nvme_zns.o 00:03:53.179 CC lib/nvme/nvme_stubs.o 00:03:53.179 CC lib/nvme/nvme_auth.o 00:03:53.179 CC lib/nvme/nvme_cuse.o 00:03:53.179 CC lib/nvme/nvme_vfio_user.o 00:03:53.179 CC lib/nvme/nvme_rdma.o 00:03:54.114 LIB libspdk_thread.a 00:03:54.114 SO libspdk_thread.so.11.0 00:03:54.372 SYMLINK libspdk_thread.so 00:03:54.372 CC lib/init/json_config.o 00:03:54.372 CC lib/vfu_tgt/tgt_endpoint.o 00:03:54.372 CC lib/fsdev/fsdev.o 00:03:54.372 CC lib/init/subsystem.o 00:03:54.372 CC lib/fsdev/fsdev_io.o 00:03:54.372 CC 
lib/init/subsystem_rpc.o 00:03:54.372 CC lib/vfu_tgt/tgt_rpc.o 00:03:54.372 CC lib/accel/accel.o 00:03:54.372 CC lib/blob/blobstore.o 00:03:54.372 CC lib/fsdev/fsdev_rpc.o 00:03:54.372 CC lib/init/rpc.o 00:03:54.372 CC lib/virtio/virtio.o 00:03:54.372 CC lib/accel/accel_rpc.o 00:03:54.372 CC lib/virtio/virtio_vhost_user.o 00:03:54.372 CC lib/blob/request.o 00:03:54.372 CC lib/blob/zeroes.o 00:03:54.372 CC lib/virtio/virtio_vfio_user.o 00:03:54.372 CC lib/accel/accel_sw.o 00:03:54.372 CC lib/virtio/virtio_pci.o 00:03:54.372 CC lib/blob/blob_bs_dev.o 00:03:54.630 LIB libspdk_init.a 00:03:54.888 SO libspdk_init.so.6.0 00:03:54.888 SYMLINK libspdk_init.so 00:03:54.888 LIB libspdk_vfu_tgt.a 00:03:54.888 SO libspdk_vfu_tgt.so.3.0 00:03:54.888 LIB libspdk_virtio.a 00:03:54.888 SYMLINK libspdk_vfu_tgt.so 00:03:54.888 SO libspdk_virtio.so.7.0 00:03:54.888 SYMLINK libspdk_virtio.so 00:03:54.888 CC lib/event/app.o 00:03:54.888 CC lib/event/reactor.o 00:03:54.888 CC lib/event/log_rpc.o 00:03:54.888 CC lib/event/app_rpc.o 00:03:54.888 CC lib/event/scheduler_static.o 00:03:55.146 LIB libspdk_fsdev.a 00:03:55.146 SO libspdk_fsdev.so.2.0 00:03:55.146 SYMLINK libspdk_fsdev.so 00:03:55.404 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:55.404 LIB libspdk_event.a 00:03:55.404 SO libspdk_event.so.14.0 00:03:55.661 SYMLINK libspdk_event.so 00:03:55.661 LIB libspdk_accel.a 00:03:55.661 SO libspdk_accel.so.16.0 00:03:55.661 LIB libspdk_nvme.a 00:03:55.661 SYMLINK libspdk_accel.so 00:03:55.919 SO libspdk_nvme.so.15.0 00:03:55.919 CC lib/bdev/bdev.o 00:03:55.919 CC lib/bdev/bdev_rpc.o 00:03:55.919 CC lib/bdev/bdev_zone.o 00:03:55.919 CC lib/bdev/part.o 00:03:55.919 CC lib/bdev/scsi_nvme.o 00:03:56.177 LIB libspdk_fuse_dispatcher.a 00:03:56.177 SYMLINK libspdk_nvme.so 00:03:56.177 SO libspdk_fuse_dispatcher.so.1.0 00:03:56.177 SYMLINK libspdk_fuse_dispatcher.so 00:03:57.561 LIB libspdk_blob.a 00:03:57.820 SO libspdk_blob.so.12.0 00:03:57.820 SYMLINK libspdk_blob.so 00:03:58.078 CC 
lib/blobfs/blobfs.o 00:03:58.078 CC lib/blobfs/tree.o 00:03:58.078 CC lib/lvol/lvol.o 00:03:58.644 LIB libspdk_bdev.a 00:03:58.644 SO libspdk_bdev.so.17.0 00:03:58.644 LIB libspdk_blobfs.a 00:03:58.908 SO libspdk_blobfs.so.11.0 00:03:58.908 SYMLINK libspdk_bdev.so 00:03:58.908 SYMLINK libspdk_blobfs.so 00:03:58.908 LIB libspdk_lvol.a 00:03:58.908 SO libspdk_lvol.so.11.0 00:03:58.908 SYMLINK libspdk_lvol.so 00:03:58.908 CC lib/scsi/dev.o 00:03:58.908 CC lib/nbd/nbd.o 00:03:58.908 CC lib/ublk/ublk.o 00:03:58.908 CC lib/scsi/lun.o 00:03:58.908 CC lib/nvmf/ctrlr.o 00:03:58.908 CC lib/scsi/port.o 00:03:58.908 CC lib/nbd/nbd_rpc.o 00:03:58.908 CC lib/ublk/ublk_rpc.o 00:03:58.908 CC lib/nvmf/ctrlr_discovery.o 00:03:58.908 CC lib/ftl/ftl_core.o 00:03:58.908 CC lib/nvmf/ctrlr_bdev.o 00:03:58.908 CC lib/scsi/scsi.o 00:03:58.908 CC lib/ftl/ftl_init.o 00:03:58.908 CC lib/nvmf/subsystem.o 00:03:58.908 CC lib/scsi/scsi_bdev.o 00:03:58.908 CC lib/ftl/ftl_layout.o 00:03:58.908 CC lib/nvmf/nvmf.o 00:03:58.908 CC lib/scsi/scsi_pr.o 00:03:58.908 CC lib/ftl/ftl_debug.o 00:03:58.908 CC lib/scsi/scsi_rpc.o 00:03:58.908 CC lib/nvmf/nvmf_rpc.o 00:03:58.908 CC lib/ftl/ftl_io.o 00:03:58.908 CC lib/nvmf/transport.o 00:03:58.908 CC lib/ftl/ftl_sb.o 00:03:58.908 CC lib/scsi/task.o 00:03:58.908 CC lib/ftl/ftl_l2p.o 00:03:58.908 CC lib/nvmf/tcp.o 00:03:58.908 CC lib/nvmf/stubs.o 00:03:58.908 CC lib/ftl/ftl_l2p_flat.o 00:03:58.908 CC lib/ftl/ftl_nv_cache.o 00:03:58.908 CC lib/ftl/ftl_band.o 00:03:58.908 CC lib/nvmf/mdns_server.o 00:03:58.908 CC lib/nvmf/vfio_user.o 00:03:58.908 CC lib/nvmf/rdma.o 00:03:58.908 CC lib/ftl/ftl_band_ops.o 00:03:58.908 CC lib/ftl/ftl_writer.o 00:03:58.908 CC lib/nvmf/auth.o 00:03:58.908 CC lib/ftl/ftl_rq.o 00:03:58.908 CC lib/ftl/ftl_reloc.o 00:03:58.908 CC lib/ftl/ftl_l2p_cache.o 00:03:58.908 CC lib/ftl/ftl_p2l.o 00:03:58.908 CC lib/ftl/ftl_p2l_log.o 00:03:58.908 CC lib/ftl/mngt/ftl_mngt.o 00:03:58.908 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:58.908 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:58.908 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:58.908 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:58.908 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:59.482 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:59.482 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:59.482 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:59.482 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:59.482 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:59.482 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:59.482 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:59.482 CC lib/ftl/utils/ftl_conf.o 00:03:59.482 CC lib/ftl/utils/ftl_md.o 00:03:59.482 CC lib/ftl/utils/ftl_mempool.o 00:03:59.482 CC lib/ftl/utils/ftl_bitmap.o 00:03:59.482 CC lib/ftl/utils/ftl_property.o 00:03:59.482 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:59.482 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:59.482 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:59.482 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:59.482 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:59.482 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:59.744 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:59.744 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:59.744 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:59.744 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:59.744 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:59.744 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:59.744 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:59.744 CC lib/ftl/base/ftl_base_dev.o 00:03:59.744 CC lib/ftl/base/ftl_base_bdev.o 00:03:59.744 CC lib/ftl/ftl_trace.o 00:04:00.003 LIB libspdk_nbd.a 00:04:00.003 SO libspdk_nbd.so.7.0 00:04:00.003 SYMLINK libspdk_nbd.so 00:04:00.003 LIB libspdk_scsi.a 00:04:00.003 SO libspdk_scsi.so.9.0 00:04:00.003 SYMLINK libspdk_scsi.so 00:04:00.261 LIB libspdk_ublk.a 00:04:00.261 SO libspdk_ublk.so.3.0 00:04:00.261 SYMLINK libspdk_ublk.so 00:04:00.261 CC lib/vhost/vhost.o 00:04:00.261 CC lib/iscsi/conn.o 00:04:00.261 CC lib/vhost/vhost_rpc.o 00:04:00.261 CC lib/iscsi/init_grp.o 00:04:00.261 CC lib/vhost/vhost_scsi.o 00:04:00.261 CC lib/iscsi/iscsi.o 
00:04:00.261 CC lib/vhost/vhost_blk.o 00:04:00.261 CC lib/iscsi/param.o 00:04:00.261 CC lib/vhost/rte_vhost_user.o 00:04:00.261 CC lib/iscsi/portal_grp.o 00:04:00.261 CC lib/iscsi/tgt_node.o 00:04:00.261 CC lib/iscsi/iscsi_subsystem.o 00:04:00.261 CC lib/iscsi/iscsi_rpc.o 00:04:00.261 CC lib/iscsi/task.o 00:04:00.520 LIB libspdk_ftl.a 00:04:00.520 SO libspdk_ftl.so.9.0 00:04:00.778 SYMLINK libspdk_ftl.so 00:04:01.713 LIB libspdk_vhost.a 00:04:01.713 SO libspdk_vhost.so.8.0 00:04:01.713 SYMLINK libspdk_vhost.so 00:04:01.713 LIB libspdk_nvmf.a 00:04:01.713 LIB libspdk_iscsi.a 00:04:01.713 SO libspdk_nvmf.so.20.0 00:04:01.713 SO libspdk_iscsi.so.8.0 00:04:01.971 SYMLINK libspdk_iscsi.so 00:04:01.971 SYMLINK libspdk_nvmf.so 00:04:02.230 CC module/env_dpdk/env_dpdk_rpc.o 00:04:02.230 CC module/vfu_device/vfu_virtio.o 00:04:02.230 CC module/vfu_device/vfu_virtio_blk.o 00:04:02.230 CC module/vfu_device/vfu_virtio_scsi.o 00:04:02.230 CC module/vfu_device/vfu_virtio_rpc.o 00:04:02.230 CC module/vfu_device/vfu_virtio_fs.o 00:04:02.230 CC module/accel/iaa/accel_iaa.o 00:04:02.230 CC module/scheduler/gscheduler/gscheduler.o 00:04:02.230 CC module/accel/iaa/accel_iaa_rpc.o 00:04:02.230 CC module/keyring/file/keyring.o 00:04:02.230 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:02.230 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:02.230 CC module/accel/dsa/accel_dsa.o 00:04:02.230 CC module/keyring/file/keyring_rpc.o 00:04:02.230 CC module/accel/dsa/accel_dsa_rpc.o 00:04:02.230 CC module/accel/error/accel_error.o 00:04:02.230 CC module/sock/posix/posix.o 00:04:02.230 CC module/accel/error/accel_error_rpc.o 00:04:02.230 CC module/keyring/linux/keyring.o 00:04:02.230 CC module/keyring/linux/keyring_rpc.o 00:04:02.230 CC module/fsdev/aio/fsdev_aio.o 00:04:02.230 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:02.230 CC module/fsdev/aio/linux_aio_mgr.o 00:04:02.230 CC module/blob/bdev/blob_bdev.o 00:04:02.230 CC module/accel/ioat/accel_ioat.o 00:04:02.489 CC 
module/accel/ioat/accel_ioat_rpc.o 00:04:02.489 LIB libspdk_env_dpdk_rpc.a 00:04:02.489 SO libspdk_env_dpdk_rpc.so.6.0 00:04:02.489 LIB libspdk_scheduler_gscheduler.a 00:04:02.489 LIB libspdk_scheduler_dpdk_governor.a 00:04:02.489 SYMLINK libspdk_env_dpdk_rpc.so 00:04:02.489 SO libspdk_scheduler_gscheduler.so.4.0 00:04:02.489 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:02.489 LIB libspdk_scheduler_dynamic.a 00:04:02.489 LIB libspdk_accel_ioat.a 00:04:02.489 LIB libspdk_accel_error.a 00:04:02.489 LIB libspdk_accel_iaa.a 00:04:02.489 LIB libspdk_keyring_file.a 00:04:02.748 SO libspdk_scheduler_dynamic.so.4.0 00:04:02.748 LIB libspdk_keyring_linux.a 00:04:02.748 SYMLINK libspdk_scheduler_gscheduler.so 00:04:02.748 SO libspdk_accel_ioat.so.6.0 00:04:02.748 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:02.748 SO libspdk_accel_error.so.2.0 00:04:02.748 SO libspdk_accel_iaa.so.3.0 00:04:02.748 SO libspdk_keyring_file.so.2.0 00:04:02.748 SO libspdk_keyring_linux.so.1.0 00:04:02.748 SYMLINK libspdk_scheduler_dynamic.so 00:04:02.748 SYMLINK libspdk_accel_ioat.so 00:04:02.748 SYMLINK libspdk_accel_error.so 00:04:02.748 SYMLINK libspdk_keyring_file.so 00:04:02.748 SYMLINK libspdk_accel_iaa.so 00:04:02.748 SYMLINK libspdk_keyring_linux.so 00:04:02.748 LIB libspdk_blob_bdev.a 00:04:02.748 SO libspdk_blob_bdev.so.12.0 00:04:02.748 LIB libspdk_accel_dsa.a 00:04:02.748 SO libspdk_accel_dsa.so.5.0 00:04:02.748 SYMLINK libspdk_blob_bdev.so 00:04:02.748 SYMLINK libspdk_accel_dsa.so 00:04:03.009 LIB libspdk_vfu_device.a 00:04:03.009 SO libspdk_vfu_device.so.3.0 00:04:03.009 CC module/bdev/lvol/vbdev_lvol.o 00:04:03.009 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:03.009 CC module/bdev/delay/vbdev_delay.o 00:04:03.009 CC module/blobfs/bdev/blobfs_bdev.o 00:04:03.009 CC module/bdev/split/vbdev_split.o 00:04:03.009 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:03.009 CC module/bdev/split/vbdev_split_rpc.o 00:04:03.009 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:03.009 CC 
module/bdev/gpt/gpt.o 00:04:03.009 CC module/bdev/error/vbdev_error.o 00:04:03.009 CC module/bdev/gpt/vbdev_gpt.o 00:04:03.009 CC module/bdev/null/bdev_null.o 00:04:03.009 CC module/bdev/error/vbdev_error_rpc.o 00:04:03.009 CC module/bdev/malloc/bdev_malloc.o 00:04:03.009 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:03.009 CC module/bdev/null/bdev_null_rpc.o 00:04:03.009 CC module/bdev/raid/bdev_raid.o 00:04:03.009 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:03.009 CC module/bdev/nvme/bdev_nvme.o 00:04:03.009 CC module/bdev/raid/bdev_raid_rpc.o 00:04:03.009 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:03.009 CC module/bdev/passthru/vbdev_passthru.o 00:04:03.009 CC module/bdev/raid/bdev_raid_sb.o 00:04:03.009 CC module/bdev/ftl/bdev_ftl.o 00:04:03.009 CC module/bdev/iscsi/bdev_iscsi.o 00:04:03.009 CC module/bdev/raid/raid0.o 00:04:03.009 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:03.009 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:03.009 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:03.009 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:03.009 CC module/bdev/nvme/nvme_rpc.o 00:04:03.009 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:03.009 CC module/bdev/raid/raid1.o 00:04:03.009 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:03.009 CC module/bdev/nvme/bdev_mdns_client.o 00:04:03.009 CC module/bdev/raid/concat.o 00:04:03.009 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:03.009 CC module/bdev/nvme/vbdev_opal.o 00:04:03.009 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:03.009 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:03.009 CC module/bdev/aio/bdev_aio.o 00:04:03.009 SYMLINK libspdk_vfu_device.so 00:04:03.009 CC module/bdev/aio/bdev_aio_rpc.o 00:04:03.009 LIB libspdk_fsdev_aio.a 00:04:03.269 SO libspdk_fsdev_aio.so.1.0 00:04:03.269 SYMLINK libspdk_fsdev_aio.so 00:04:03.527 LIB libspdk_sock_posix.a 00:04:03.527 SO libspdk_sock_posix.so.6.0 00:04:03.527 LIB libspdk_blobfs_bdev.a 00:04:03.527 SO libspdk_blobfs_bdev.so.6.0 00:04:03.527 SYMLINK 
libspdk_sock_posix.so 00:04:03.527 SYMLINK libspdk_blobfs_bdev.so 00:04:03.527 LIB libspdk_bdev_split.a 00:04:03.527 SO libspdk_bdev_split.so.6.0 00:04:03.527 LIB libspdk_bdev_malloc.a 00:04:03.527 LIB libspdk_bdev_ftl.a 00:04:03.527 LIB libspdk_bdev_error.a 00:04:03.527 LIB libspdk_bdev_null.a 00:04:03.527 LIB libspdk_bdev_gpt.a 00:04:03.527 SO libspdk_bdev_malloc.so.6.0 00:04:03.527 SO libspdk_bdev_null.so.6.0 00:04:03.527 SO libspdk_bdev_ftl.so.6.0 00:04:03.527 SO libspdk_bdev_error.so.6.0 00:04:03.527 LIB libspdk_bdev_passthru.a 00:04:03.527 SO libspdk_bdev_gpt.so.6.0 00:04:03.785 SYMLINK libspdk_bdev_split.so 00:04:03.785 SO libspdk_bdev_passthru.so.6.0 00:04:03.785 LIB libspdk_bdev_zone_block.a 00:04:03.785 LIB libspdk_bdev_aio.a 00:04:03.785 SYMLINK libspdk_bdev_malloc.so 00:04:03.785 SYMLINK libspdk_bdev_null.so 00:04:03.785 SYMLINK libspdk_bdev_error.so 00:04:03.785 SYMLINK libspdk_bdev_ftl.so 00:04:03.785 SO libspdk_bdev_zone_block.so.6.0 00:04:03.785 SYMLINK libspdk_bdev_gpt.so 00:04:03.785 SO libspdk_bdev_aio.so.6.0 00:04:03.785 LIB libspdk_bdev_delay.a 00:04:03.785 SYMLINK libspdk_bdev_passthru.so 00:04:03.785 LIB libspdk_bdev_iscsi.a 00:04:03.785 SO libspdk_bdev_delay.so.6.0 00:04:03.785 SYMLINK libspdk_bdev_zone_block.so 00:04:03.785 SO libspdk_bdev_iscsi.so.6.0 00:04:03.785 SYMLINK libspdk_bdev_aio.so 00:04:03.785 SYMLINK libspdk_bdev_delay.so 00:04:03.785 LIB libspdk_bdev_lvol.a 00:04:03.785 SYMLINK libspdk_bdev_iscsi.so 00:04:03.785 SO libspdk_bdev_lvol.so.6.0 00:04:03.785 LIB libspdk_bdev_virtio.a 00:04:03.785 SYMLINK libspdk_bdev_lvol.so 00:04:03.785 SO libspdk_bdev_virtio.so.6.0 00:04:04.043 SYMLINK libspdk_bdev_virtio.so 00:04:04.301 LIB libspdk_bdev_raid.a 00:04:04.301 SO libspdk_bdev_raid.so.6.0 00:04:04.559 SYMLINK libspdk_bdev_raid.so 00:04:05.936 LIB libspdk_bdev_nvme.a 00:04:05.936 SO libspdk_bdev_nvme.so.7.1 00:04:05.936 SYMLINK libspdk_bdev_nvme.so 00:04:06.195 CC module/event/subsystems/iobuf/iobuf.o 00:04:06.195 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:04:06.195 CC module/event/subsystems/sock/sock.o 00:04:06.195 CC module/event/subsystems/fsdev/fsdev.o 00:04:06.195 CC module/event/subsystems/scheduler/scheduler.o 00:04:06.195 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:06.195 CC module/event/subsystems/keyring/keyring.o 00:04:06.195 CC module/event/subsystems/vmd/vmd.o 00:04:06.195 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:06.195 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:06.454 LIB libspdk_event_keyring.a 00:04:06.454 LIB libspdk_event_fsdev.a 00:04:06.454 LIB libspdk_event_vhost_blk.a 00:04:06.454 LIB libspdk_event_scheduler.a 00:04:06.454 LIB libspdk_event_vfu_tgt.a 00:04:06.454 LIB libspdk_event_vmd.a 00:04:06.454 LIB libspdk_event_sock.a 00:04:06.454 SO libspdk_event_keyring.so.1.0 00:04:06.454 LIB libspdk_event_iobuf.a 00:04:06.454 SO libspdk_event_vhost_blk.so.3.0 00:04:06.454 SO libspdk_event_fsdev.so.1.0 00:04:06.454 SO libspdk_event_scheduler.so.4.0 00:04:06.454 SO libspdk_event_vfu_tgt.so.3.0 00:04:06.454 SO libspdk_event_sock.so.5.0 00:04:06.454 SO libspdk_event_vmd.so.6.0 00:04:06.454 SO libspdk_event_iobuf.so.3.0 00:04:06.454 SYMLINK libspdk_event_keyring.so 00:04:06.454 SYMLINK libspdk_event_vhost_blk.so 00:04:06.454 SYMLINK libspdk_event_fsdev.so 00:04:06.454 SYMLINK libspdk_event_vfu_tgt.so 00:04:06.454 SYMLINK libspdk_event_scheduler.so 00:04:06.454 SYMLINK libspdk_event_sock.so 00:04:06.454 SYMLINK libspdk_event_vmd.so 00:04:06.454 SYMLINK libspdk_event_iobuf.so 00:04:06.730 CC module/event/subsystems/accel/accel.o 00:04:07.036 LIB libspdk_event_accel.a 00:04:07.036 SO libspdk_event_accel.so.6.0 00:04:07.036 SYMLINK libspdk_event_accel.so 00:04:07.295 CC module/event/subsystems/bdev/bdev.o 00:04:07.295 LIB libspdk_event_bdev.a 00:04:07.295 SO libspdk_event_bdev.so.6.0 00:04:07.295 SYMLINK libspdk_event_bdev.so 00:04:07.552 CC module/event/subsystems/ublk/ublk.o 00:04:07.552 CC module/event/subsystems/nvmf/nvmf_rpc.o 
00:04:07.552 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:07.552 CC module/event/subsystems/nbd/nbd.o 00:04:07.552 CC module/event/subsystems/scsi/scsi.o 00:04:07.810 LIB libspdk_event_nbd.a 00:04:07.810 LIB libspdk_event_ublk.a 00:04:07.810 SO libspdk_event_ublk.so.3.0 00:04:07.810 SO libspdk_event_nbd.so.6.0 00:04:07.810 LIB libspdk_event_scsi.a 00:04:07.810 SO libspdk_event_scsi.so.6.0 00:04:07.810 SYMLINK libspdk_event_ublk.so 00:04:07.810 SYMLINK libspdk_event_nbd.so 00:04:07.810 SYMLINK libspdk_event_scsi.so 00:04:07.810 LIB libspdk_event_nvmf.a 00:04:07.810 SO libspdk_event_nvmf.so.6.0 00:04:07.810 SYMLINK libspdk_event_nvmf.so 00:04:08.069 CC module/event/subsystems/iscsi/iscsi.o 00:04:08.069 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:08.069 LIB libspdk_event_vhost_scsi.a 00:04:08.069 LIB libspdk_event_iscsi.a 00:04:08.069 SO libspdk_event_vhost_scsi.so.3.0 00:04:08.327 SO libspdk_event_iscsi.so.6.0 00:04:08.327 SYMLINK libspdk_event_vhost_scsi.so 00:04:08.327 SYMLINK libspdk_event_iscsi.so 00:04:08.327 SO libspdk.so.6.0 00:04:08.327 SYMLINK libspdk.so 00:04:08.594 CC app/trace_record/trace_record.o 00:04:08.594 CXX app/trace/trace.o 00:04:08.594 CC app/spdk_lspci/spdk_lspci.o 00:04:08.594 CC app/spdk_top/spdk_top.o 00:04:08.594 CC app/spdk_nvme_identify/identify.o 00:04:08.594 CC test/rpc_client/rpc_client_test.o 00:04:08.594 CC app/spdk_nvme_discover/discovery_aer.o 00:04:08.594 TEST_HEADER include/spdk/accel.h 00:04:08.594 TEST_HEADER include/spdk/accel_module.h 00:04:08.594 TEST_HEADER include/spdk/assert.h 00:04:08.594 TEST_HEADER include/spdk/barrier.h 00:04:08.594 TEST_HEADER include/spdk/base64.h 00:04:08.594 TEST_HEADER include/spdk/bdev.h 00:04:08.594 CC app/spdk_nvme_perf/perf.o 00:04:08.594 TEST_HEADER include/spdk/bdev_module.h 00:04:08.594 TEST_HEADER include/spdk/bdev_zone.h 00:04:08.594 TEST_HEADER include/spdk/bit_array.h 00:04:08.594 TEST_HEADER include/spdk/bit_pool.h 00:04:08.594 TEST_HEADER include/spdk/blob_bdev.h 
00:04:08.594 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:08.594 TEST_HEADER include/spdk/blob.h 00:04:08.594 TEST_HEADER include/spdk/blobfs.h 00:04:08.594 TEST_HEADER include/spdk/conf.h 00:04:08.594 TEST_HEADER include/spdk/config.h 00:04:08.594 TEST_HEADER include/spdk/cpuset.h 00:04:08.594 TEST_HEADER include/spdk/crc16.h 00:04:08.594 TEST_HEADER include/spdk/crc32.h 00:04:08.594 TEST_HEADER include/spdk/crc64.h 00:04:08.594 TEST_HEADER include/spdk/dif.h 00:04:08.594 TEST_HEADER include/spdk/dma.h 00:04:08.594 TEST_HEADER include/spdk/endian.h 00:04:08.594 TEST_HEADER include/spdk/env_dpdk.h 00:04:08.594 TEST_HEADER include/spdk/env.h 00:04:08.594 TEST_HEADER include/spdk/event.h 00:04:08.594 TEST_HEADER include/spdk/fd_group.h 00:04:08.594 TEST_HEADER include/spdk/fd.h 00:04:08.594 TEST_HEADER include/spdk/file.h 00:04:08.594 TEST_HEADER include/spdk/fsdev.h 00:04:08.594 TEST_HEADER include/spdk/fsdev_module.h 00:04:08.594 TEST_HEADER include/spdk/ftl.h 00:04:08.594 TEST_HEADER include/spdk/gpt_spec.h 00:04:08.594 TEST_HEADER include/spdk/hexlify.h 00:04:08.594 TEST_HEADER include/spdk/idxd.h 00:04:08.594 TEST_HEADER include/spdk/histogram_data.h 00:04:08.594 TEST_HEADER include/spdk/idxd_spec.h 00:04:08.594 TEST_HEADER include/spdk/init.h 00:04:08.594 TEST_HEADER include/spdk/ioat.h 00:04:08.594 TEST_HEADER include/spdk/ioat_spec.h 00:04:08.594 TEST_HEADER include/spdk/iscsi_spec.h 00:04:08.594 TEST_HEADER include/spdk/json.h 00:04:08.594 TEST_HEADER include/spdk/jsonrpc.h 00:04:08.594 TEST_HEADER include/spdk/keyring.h 00:04:08.594 TEST_HEADER include/spdk/keyring_module.h 00:04:08.594 TEST_HEADER include/spdk/likely.h 00:04:08.594 TEST_HEADER include/spdk/log.h 00:04:08.594 TEST_HEADER include/spdk/md5.h 00:04:08.594 TEST_HEADER include/spdk/lvol.h 00:04:08.594 TEST_HEADER include/spdk/memory.h 00:04:08.594 TEST_HEADER include/spdk/mmio.h 00:04:08.594 TEST_HEADER include/spdk/nbd.h 00:04:08.594 TEST_HEADER include/spdk/net.h 00:04:08.594 TEST_HEADER 
include/spdk/notify.h 00:04:08.594 TEST_HEADER include/spdk/nvme.h 00:04:08.594 TEST_HEADER include/spdk/nvme_intel.h 00:04:08.594 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:08.594 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:08.594 TEST_HEADER include/spdk/nvme_zns.h 00:04:08.594 TEST_HEADER include/spdk/nvme_spec.h 00:04:08.594 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:08.594 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:08.594 TEST_HEADER include/spdk/nvmf.h 00:04:08.594 TEST_HEADER include/spdk/nvmf_spec.h 00:04:08.594 TEST_HEADER include/spdk/nvmf_transport.h 00:04:08.594 TEST_HEADER include/spdk/opal_spec.h 00:04:08.594 TEST_HEADER include/spdk/opal.h 00:04:08.594 TEST_HEADER include/spdk/pci_ids.h 00:04:08.594 TEST_HEADER include/spdk/pipe.h 00:04:08.594 TEST_HEADER include/spdk/reduce.h 00:04:08.594 TEST_HEADER include/spdk/queue.h 00:04:08.594 TEST_HEADER include/spdk/scheduler.h 00:04:08.594 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:08.594 TEST_HEADER include/spdk/rpc.h 00:04:08.594 TEST_HEADER include/spdk/scsi.h 00:04:08.594 TEST_HEADER include/spdk/scsi_spec.h 00:04:08.594 TEST_HEADER include/spdk/sock.h 00:04:08.594 TEST_HEADER include/spdk/stdinc.h 00:04:08.594 TEST_HEADER include/spdk/thread.h 00:04:08.594 TEST_HEADER include/spdk/string.h 00:04:08.594 TEST_HEADER include/spdk/trace.h 00:04:08.594 TEST_HEADER include/spdk/trace_parser.h 00:04:08.594 TEST_HEADER include/spdk/tree.h 00:04:08.594 TEST_HEADER include/spdk/ublk.h 00:04:08.594 TEST_HEADER include/spdk/util.h 00:04:08.594 TEST_HEADER include/spdk/version.h 00:04:08.594 TEST_HEADER include/spdk/uuid.h 00:04:08.595 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:08.595 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:08.595 TEST_HEADER include/spdk/vhost.h 00:04:08.595 TEST_HEADER include/spdk/vmd.h 00:04:08.595 TEST_HEADER include/spdk/xor.h 00:04:08.595 TEST_HEADER include/spdk/zipf.h 00:04:08.595 CXX test/cpp_headers/accel.o 00:04:08.595 CXX test/cpp_headers/accel_module.o 
00:04:08.595 CXX test/cpp_headers/assert.o 00:04:08.595 CXX test/cpp_headers/barrier.o 00:04:08.595 CXX test/cpp_headers/base64.o 00:04:08.595 CXX test/cpp_headers/bdev.o 00:04:08.595 CXX test/cpp_headers/bdev_module.o 00:04:08.595 CXX test/cpp_headers/bdev_zone.o 00:04:08.595 CXX test/cpp_headers/bit_array.o 00:04:08.595 CXX test/cpp_headers/bit_pool.o 00:04:08.595 CC app/spdk_dd/spdk_dd.o 00:04:08.595 CC app/nvmf_tgt/nvmf_main.o 00:04:08.595 CXX test/cpp_headers/blob_bdev.o 00:04:08.595 CXX test/cpp_headers/blobfs_bdev.o 00:04:08.595 CXX test/cpp_headers/blobfs.o 00:04:08.595 CXX test/cpp_headers/blob.o 00:04:08.595 CXX test/cpp_headers/conf.o 00:04:08.595 CXX test/cpp_headers/config.o 00:04:08.595 CXX test/cpp_headers/cpuset.o 00:04:08.595 CXX test/cpp_headers/crc16.o 00:04:08.595 CC app/iscsi_tgt/iscsi_tgt.o 00:04:08.858 CXX test/cpp_headers/crc32.o 00:04:08.858 CC app/spdk_tgt/spdk_tgt.o 00:04:08.858 CC test/env/pci/pci_ut.o 00:04:08.858 CC examples/ioat/perf/perf.o 00:04:08.858 CC examples/ioat/verify/verify.o 00:04:08.858 CC test/env/vtophys/vtophys.o 00:04:08.858 CC examples/util/zipf/zipf.o 00:04:08.858 CC test/env/memory/memory_ut.o 00:04:08.858 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:08.858 CC test/app/stub/stub.o 00:04:08.858 CC test/thread/poller_perf/poller_perf.o 00:04:08.858 CC test/app/histogram_perf/histogram_perf.o 00:04:08.858 CC test/app/jsoncat/jsoncat.o 00:04:08.858 CC app/fio/nvme/fio_plugin.o 00:04:08.858 CC app/fio/bdev/fio_plugin.o 00:04:08.858 CC test/app/bdev_svc/bdev_svc.o 00:04:08.858 CC test/dma/test_dma/test_dma.o 00:04:08.858 LINK spdk_lspci 00:04:09.128 CC test/env/mem_callbacks/mem_callbacks.o 00:04:09.128 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:09.128 LINK rpc_client_test 00:04:09.128 LINK spdk_nvme_discover 00:04:09.128 LINK interrupt_tgt 00:04:09.128 LINK poller_perf 00:04:09.128 LINK jsoncat 00:04:09.128 LINK vtophys 00:04:09.128 LINK histogram_perf 00:04:09.128 LINK spdk_trace_record 00:04:09.128 CXX 
test/cpp_headers/crc64.o 00:04:09.128 LINK zipf 00:04:09.128 CXX test/cpp_headers/dif.o 00:04:09.128 CXX test/cpp_headers/dma.o 00:04:09.128 LINK env_dpdk_post_init 00:04:09.128 CXX test/cpp_headers/endian.o 00:04:09.128 CXX test/cpp_headers/env_dpdk.o 00:04:09.128 CXX test/cpp_headers/env.o 00:04:09.128 CXX test/cpp_headers/event.o 00:04:09.128 CXX test/cpp_headers/fd_group.o 00:04:09.128 CXX test/cpp_headers/fd.o 00:04:09.128 LINK nvmf_tgt 00:04:09.128 CXX test/cpp_headers/file.o 00:04:09.128 CXX test/cpp_headers/fsdev.o 00:04:09.128 LINK stub 00:04:09.128 LINK iscsi_tgt 00:04:09.128 CXX test/cpp_headers/fsdev_module.o 00:04:09.128 CXX test/cpp_headers/ftl.o 00:04:09.128 CXX test/cpp_headers/gpt_spec.o 00:04:09.128 CXX test/cpp_headers/hexlify.o 00:04:09.128 CXX test/cpp_headers/histogram_data.o 00:04:09.398 LINK bdev_svc 00:04:09.398 LINK spdk_tgt 00:04:09.398 CXX test/cpp_headers/idxd.o 00:04:09.398 LINK ioat_perf 00:04:09.398 LINK verify 00:04:09.398 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:09.398 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:09.398 CXX test/cpp_headers/idxd_spec.o 00:04:09.398 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:09.398 CXX test/cpp_headers/init.o 00:04:09.398 CXX test/cpp_headers/ioat.o 00:04:09.398 CXX test/cpp_headers/ioat_spec.o 00:04:09.398 CXX test/cpp_headers/iscsi_spec.o 00:04:09.398 LINK spdk_dd 00:04:09.398 CXX test/cpp_headers/json.o 00:04:09.398 CXX test/cpp_headers/jsonrpc.o 00:04:09.398 LINK spdk_trace 00:04:09.398 CXX test/cpp_headers/keyring.o 00:04:09.664 CXX test/cpp_headers/keyring_module.o 00:04:09.664 CXX test/cpp_headers/likely.o 00:04:09.664 CXX test/cpp_headers/log.o 00:04:09.664 CXX test/cpp_headers/lvol.o 00:04:09.664 LINK pci_ut 00:04:09.664 CXX test/cpp_headers/md5.o 00:04:09.664 CXX test/cpp_headers/memory.o 00:04:09.664 CXX test/cpp_headers/mmio.o 00:04:09.664 CXX test/cpp_headers/nbd.o 00:04:09.664 CXX test/cpp_headers/net.o 00:04:09.664 CXX test/cpp_headers/notify.o 00:04:09.664 CXX 
test/cpp_headers/nvme.o 00:04:09.664 CXX test/cpp_headers/nvme_intel.o 00:04:09.664 CXX test/cpp_headers/nvme_ocssd.o 00:04:09.664 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:09.664 CXX test/cpp_headers/nvme_spec.o 00:04:09.664 CXX test/cpp_headers/nvme_zns.o 00:04:09.664 CXX test/cpp_headers/nvmf_cmd.o 00:04:09.664 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:09.664 CXX test/cpp_headers/nvmf.o 00:04:09.664 CC test/event/reactor/reactor.o 00:04:09.664 CC test/event/reactor_perf/reactor_perf.o 00:04:09.664 CC test/event/event_perf/event_perf.o 00:04:09.664 CXX test/cpp_headers/nvmf_spec.o 00:04:09.664 CC test/event/app_repeat/app_repeat.o 00:04:09.923 CXX test/cpp_headers/nvmf_transport.o 00:04:09.923 CXX test/cpp_headers/opal.o 00:04:09.923 CXX test/cpp_headers/opal_spec.o 00:04:09.923 CXX test/cpp_headers/pci_ids.o 00:04:09.923 CXX test/cpp_headers/pipe.o 00:04:09.923 LINK nvme_fuzz 00:04:09.923 CC test/event/scheduler/scheduler.o 00:04:09.923 CXX test/cpp_headers/queue.o 00:04:09.923 LINK test_dma 00:04:09.923 CC examples/thread/thread/thread_ex.o 00:04:09.923 LINK spdk_nvme 00:04:09.923 CC examples/sock/hello_world/hello_sock.o 00:04:09.923 CC examples/vmd/lsvmd/lsvmd.o 00:04:09.923 CC examples/idxd/perf/perf.o 00:04:09.923 LINK spdk_bdev 00:04:09.923 CXX test/cpp_headers/reduce.o 00:04:09.923 CXX test/cpp_headers/rpc.o 00:04:09.923 CC examples/vmd/led/led.o 00:04:09.923 CXX test/cpp_headers/scheduler.o 00:04:09.923 CXX test/cpp_headers/scsi.o 00:04:09.923 CXX test/cpp_headers/scsi_spec.o 00:04:09.923 CXX test/cpp_headers/sock.o 00:04:09.923 CXX test/cpp_headers/stdinc.o 00:04:10.185 CXX test/cpp_headers/string.o 00:04:10.185 CXX test/cpp_headers/thread.o 00:04:10.185 CXX test/cpp_headers/trace.o 00:04:10.185 CXX test/cpp_headers/trace_parser.o 00:04:10.185 CXX test/cpp_headers/tree.o 00:04:10.185 LINK reactor 00:04:10.185 LINK reactor_perf 00:04:10.185 LINK event_perf 00:04:10.185 LINK app_repeat 00:04:10.185 CXX test/cpp_headers/ublk.o 00:04:10.185 CXX 
test/cpp_headers/util.o 00:04:10.185 CXX test/cpp_headers/uuid.o 00:04:10.185 CC app/vhost/vhost.o 00:04:10.185 CXX test/cpp_headers/version.o 00:04:10.185 CXX test/cpp_headers/vfio_user_pci.o 00:04:10.185 CXX test/cpp_headers/vfio_user_spec.o 00:04:10.185 LINK vhost_fuzz 00:04:10.185 CXX test/cpp_headers/vhost.o 00:04:10.185 CXX test/cpp_headers/vmd.o 00:04:10.185 CXX test/cpp_headers/xor.o 00:04:10.185 CXX test/cpp_headers/zipf.o 00:04:10.185 LINK spdk_nvme_perf 00:04:10.185 LINK spdk_nvme_identify 00:04:10.185 LINK mem_callbacks 00:04:10.185 LINK lsvmd 00:04:10.444 LINK led 00:04:10.444 LINK scheduler 00:04:10.444 LINK spdk_top 00:04:10.444 LINK hello_sock 00:04:10.444 LINK thread 00:04:10.444 LINK vhost 00:04:10.444 CC test/nvme/aer/aer.o 00:04:10.444 CC test/nvme/startup/startup.o 00:04:10.444 CC test/nvme/sgl/sgl.o 00:04:10.444 CC test/nvme/simple_copy/simple_copy.o 00:04:10.444 CC test/nvme/cuse/cuse.o 00:04:10.444 CC test/nvme/overhead/overhead.o 00:04:10.444 CC test/nvme/e2edp/nvme_dp.o 00:04:10.444 CC test/nvme/fused_ordering/fused_ordering.o 00:04:10.444 CC test/nvme/err_injection/err_injection.o 00:04:10.444 CC test/nvme/connect_stress/connect_stress.o 00:04:10.444 CC test/nvme/reset/reset.o 00:04:10.444 CC test/nvme/fdp/fdp.o 00:04:10.444 CC test/nvme/compliance/nvme_compliance.o 00:04:10.444 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:10.444 CC test/nvme/reserve/reserve.o 00:04:10.444 CC test/nvme/boot_partition/boot_partition.o 00:04:10.703 CC test/accel/dif/dif.o 00:04:10.703 LINK idxd_perf 00:04:10.703 CC test/blobfs/mkfs/mkfs.o 00:04:10.703 CC test/lvol/esnap/esnap.o 00:04:10.703 LINK startup 00:04:10.703 LINK boot_partition 00:04:10.703 LINK connect_stress 00:04:10.703 LINK err_injection 00:04:10.703 LINK fused_ordering 00:04:10.963 CC examples/nvme/reconnect/reconnect.o 00:04:10.963 CC examples/nvme/hello_world/hello_world.o 00:04:10.963 CC examples/nvme/hotplug/hotplug.o 00:04:10.963 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:10.963 
LINK reserve 00:04:10.963 CC examples/nvme/abort/abort.o 00:04:10.963 CC examples/nvme/arbitration/arbitration.o 00:04:10.963 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:10.963 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:10.963 LINK doorbell_aers 00:04:10.963 LINK memory_ut 00:04:10.963 LINK mkfs 00:04:10.963 LINK nvme_dp 00:04:10.963 LINK aer 00:04:10.963 LINK sgl 00:04:10.963 LINK simple_copy 00:04:10.963 LINK nvme_compliance 00:04:10.963 CC examples/accel/perf/accel_perf.o 00:04:10.963 LINK fdp 00:04:10.963 CC examples/blob/hello_world/hello_blob.o 00:04:10.963 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:10.963 CC examples/blob/cli/blobcli.o 00:04:10.963 LINK reset 00:04:10.963 LINK overhead 00:04:11.222 LINK cmb_copy 00:04:11.222 LINK hello_world 00:04:11.222 LINK pmr_persistence 00:04:11.222 LINK reconnect 00:04:11.222 LINK hotplug 00:04:11.222 LINK abort 00:04:11.222 LINK hello_blob 00:04:11.480 LINK arbitration 00:04:11.480 LINK hello_fsdev 00:04:11.481 LINK dif 00:04:11.481 LINK nvme_manage 00:04:11.481 LINK accel_perf 00:04:11.761 LINK blobcli 00:04:11.761 CC test/bdev/bdevio/bdevio.o 00:04:11.761 CC examples/bdev/hello_world/hello_bdev.o 00:04:11.761 CC examples/bdev/bdevperf/bdevperf.o 00:04:12.019 LINK iscsi_fuzz 00:04:12.019 LINK cuse 00:04:12.019 LINK hello_bdev 00:04:12.278 LINK bdevio 00:04:12.844 LINK bdevperf 00:04:13.102 CC examples/nvmf/nvmf/nvmf.o 00:04:13.359 LINK nvmf 00:04:15.889 LINK esnap 00:04:16.147 00:04:16.147 real 1m11.239s 00:04:16.147 user 11m51.072s 00:04:16.147 sys 2m40.716s 00:04:16.147 22:35:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:16.147 22:35:23 make -- common/autotest_common.sh@10 -- $ set +x 00:04:16.147 ************************************ 00:04:16.147 END TEST make 00:04:16.147 ************************************ 00:04:16.147 22:35:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:16.147 22:35:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:16.147 22:35:23 
-- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:16.147 22:35:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.147 22:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:16.147 22:35:23 -- pm/common@44 -- $ pid=4059079 00:04:16.147 22:35:23 -- pm/common@50 -- $ kill -TERM 4059079 00:04:16.147 22:35:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.147 22:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:16.147 22:35:23 -- pm/common@44 -- $ pid=4059081 00:04:16.147 22:35:23 -- pm/common@50 -- $ kill -TERM 4059081 00:04:16.147 22:35:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.147 22:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:16.147 22:35:23 -- pm/common@44 -- $ pid=4059083 00:04:16.147 22:35:23 -- pm/common@50 -- $ kill -TERM 4059083 00:04:16.147 22:35:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.147 22:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:16.147 22:35:23 -- pm/common@44 -- $ pid=4059113 00:04:16.147 22:35:23 -- pm/common@50 -- $ sudo -E kill -TERM 4059113 00:04:16.147 22:35:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:16.147 22:35:23 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:16.406 22:35:23 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.406 22:35:23 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.406 22:35:23 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.406 22:35:23 -- common/autotest_common.sh@1711 
-- # lt 1.15 2 00:04:16.406 22:35:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.406 22:35:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.406 22:35:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.406 22:35:23 -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.406 22:35:23 -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.406 22:35:23 -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.406 22:35:23 -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.406 22:35:23 -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.406 22:35:23 -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.406 22:35:23 -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.406 22:35:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.406 22:35:23 -- scripts/common.sh@344 -- # case "$op" in 00:04:16.406 22:35:23 -- scripts/common.sh@345 -- # : 1 00:04:16.406 22:35:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.406 22:35:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.406 22:35:23 -- scripts/common.sh@365 -- # decimal 1 00:04:16.406 22:35:23 -- scripts/common.sh@353 -- # local d=1 00:04:16.406 22:35:23 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.406 22:35:23 -- scripts/common.sh@355 -- # echo 1 00:04:16.406 22:35:23 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.406 22:35:23 -- scripts/common.sh@366 -- # decimal 2 00:04:16.406 22:35:23 -- scripts/common.sh@353 -- # local d=2 00:04:16.406 22:35:23 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.406 22:35:23 -- scripts/common.sh@355 -- # echo 2 00:04:16.406 22:35:23 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.407 22:35:23 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.407 22:35:23 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.407 22:35:23 -- scripts/common.sh@368 -- # return 0 00:04:16.407 22:35:23 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.407 22:35:23 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.407 --rc genhtml_branch_coverage=1 00:04:16.407 --rc genhtml_function_coverage=1 00:04:16.407 --rc genhtml_legend=1 00:04:16.407 --rc geninfo_all_blocks=1 00:04:16.407 --rc geninfo_unexecuted_blocks=1 00:04:16.407 00:04:16.407 ' 00:04:16.407 22:35:23 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.407 --rc genhtml_branch_coverage=1 00:04:16.407 --rc genhtml_function_coverage=1 00:04:16.407 --rc genhtml_legend=1 00:04:16.407 --rc geninfo_all_blocks=1 00:04:16.407 --rc geninfo_unexecuted_blocks=1 00:04:16.407 00:04:16.407 ' 00:04:16.407 22:35:23 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.407 --rc genhtml_branch_coverage=1 00:04:16.407 --rc 
genhtml_function_coverage=1 00:04:16.407 --rc genhtml_legend=1 00:04:16.407 --rc geninfo_all_blocks=1 00:04:16.407 --rc geninfo_unexecuted_blocks=1 00:04:16.407 00:04:16.407 ' 00:04:16.407 22:35:23 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.407 --rc genhtml_branch_coverage=1 00:04:16.407 --rc genhtml_function_coverage=1 00:04:16.407 --rc genhtml_legend=1 00:04:16.407 --rc geninfo_all_blocks=1 00:04:16.407 --rc geninfo_unexecuted_blocks=1 00:04:16.407 00:04:16.407 ' 00:04:16.407 22:35:23 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:16.407 22:35:23 -- nvmf/common.sh@7 -- # uname -s 00:04:16.407 22:35:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:16.407 22:35:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:16.407 22:35:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:16.407 22:35:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:16.407 22:35:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:16.407 22:35:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:16.407 22:35:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:16.407 22:35:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:16.407 22:35:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:16.407 22:35:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:16.407 22:35:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:16.407 22:35:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:16.407 22:35:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:16.407 22:35:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:16.407 22:35:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:16.407 22:35:24 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:16.407 22:35:24 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:16.407 22:35:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:16.407 22:35:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:16.407 22:35:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:16.407 22:35:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:16.407 22:35:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.407 22:35:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.407 22:35:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.407 22:35:24 -- paths/export.sh@5 -- # export PATH 00:04:16.407 22:35:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.407 22:35:24 -- nvmf/common.sh@51 -- # : 0 00:04:16.407 22:35:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:16.407 22:35:24 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:16.407 22:35:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:16.407 22:35:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:16.407 22:35:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:16.407 22:35:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:16.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:16.407 22:35:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:16.407 22:35:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:16.407 22:35:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:16.407 22:35:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:16.407 22:35:24 -- spdk/autotest.sh@32 -- # uname -s 00:04:16.407 22:35:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:16.407 22:35:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:16.407 22:35:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:16.407 22:35:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:16.407 22:35:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:16.407 22:35:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:16.407 22:35:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:16.407 22:35:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:16.407 22:35:24 -- spdk/autotest.sh@48 -- # udevadm_pid=4119870 00:04:16.407 22:35:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:16.407 22:35:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:16.407 22:35:24 -- pm/common@17 -- # local monitor 00:04:16.407 22:35:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.407 22:35:24 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:16.407 22:35:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.407 22:35:24 -- pm/common@21 -- # date +%s 00:04:16.407 22:35:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.407 22:35:24 -- pm/common@21 -- # date +%s 00:04:16.407 22:35:24 -- pm/common@25 -- # sleep 1 00:04:16.407 22:35:24 -- pm/common@21 -- # date +%s 00:04:16.407 22:35:24 -- pm/common@21 -- # date +%s 00:04:16.407 22:35:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733866524 00:04:16.407 22:35:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733866524 00:04:16.407 22:35:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733866524 00:04:16.407 22:35:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733866524 00:04:16.407 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733866524_collect-cpu-load.pm.log 00:04:16.407 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733866524_collect-vmstat.pm.log 00:04:16.407 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733866524_collect-cpu-temp.pm.log 00:04:16.407 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733866524_collect-bmc-pm.bmc.pm.log 00:04:17.343 
22:35:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:17.343 22:35:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:17.343 22:35:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.343 22:35:25 -- common/autotest_common.sh@10 -- # set +x 00:04:17.343 22:35:25 -- spdk/autotest.sh@59 -- # create_test_list 00:04:17.343 22:35:25 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:17.343 22:35:25 -- common/autotest_common.sh@10 -- # set +x 00:04:17.343 22:35:25 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:17.343 22:35:25 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.343 22:35:25 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.343 22:35:25 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:17.343 22:35:25 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.343 22:35:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:17.343 22:35:25 -- common/autotest_common.sh@1457 -- # uname 00:04:17.600 22:35:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:17.600 22:35:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:17.600 22:35:25 -- common/autotest_common.sh@1477 -- # uname 00:04:17.600 22:35:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:17.600 22:35:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:17.600 22:35:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:17.600 lcov: LCOV version 1.15 00:04:17.600 22:35:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:35.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:35.702 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:57.673 22:36:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:57.673 22:36:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.673 22:36:02 -- common/autotest_common.sh@10 -- # set +x 00:04:57.673 22:36:02 -- spdk/autotest.sh@78 -- # rm -f 00:04:57.673 22:36:02 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:57.674 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:57.674 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:57.674 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:57.674 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:57.674 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:57.674 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:57.674 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:57.674 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:57.674 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:57.674 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:57.674 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:57.674 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:57.674 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:57.674 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:57.674 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:57.674 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:57.674 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:57.674 22:36:04 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:57.674 22:36:04 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:57.674 22:36:04 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:57.674 22:36:04 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:57.674 22:36:04 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:57.674 22:36:04 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:57.674 22:36:04 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:57.674 22:36:04 -- common/autotest_common.sh@1669 -- # bdf=0000:88:00.0 00:04:57.674 22:36:04 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:57.674 22:36:04 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:57.674 22:36:04 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:57.674 22:36:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:57.674 22:36:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:57.674 22:36:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:57.674 22:36:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:57.674 22:36:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:57.674 22:36:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:57.674 22:36:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:57.674 22:36:04 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:57.674 No valid GPT data, bailing 00:04:57.674 22:36:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:57.674 22:36:04 -- scripts/common.sh@394 -- # pt= 00:04:57.674 22:36:04 -- scripts/common.sh@395 -- 
# return 1 00:04:57.674 22:36:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:57.674 1+0 records in 00:04:57.674 1+0 records out 00:04:57.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00235593 s, 445 MB/s 00:04:57.674 22:36:04 -- spdk/autotest.sh@105 -- # sync 00:04:57.674 22:36:04 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:57.674 22:36:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:57.674 22:36:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:58.611 22:36:06 -- spdk/autotest.sh@111 -- # uname -s 00:04:58.611 22:36:06 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:58.611 22:36:06 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:58.611 22:36:06 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:59.991 Hugepages 00:04:59.991 node hugesize free / total 00:04:59.991 node0 1048576kB 0 / 0 00:04:59.991 node0 2048kB 0 / 0 00:04:59.991 node1 1048576kB 0 / 0 00:04:59.991 node1 2048kB 0 / 0 00:04:59.991 00:04:59.991 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:59.991 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:59.991 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:59.991 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:59.992 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:59.992 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:59.992 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:59.992 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:59.992 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:59.992 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:59.992 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:59.992 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:59.992 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:59.992 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:59.992 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:59.992 I/OAT 0000:80:04.6 8086 0e26 1 
ioatdma - - 00:04:59.992 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:59.992 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:59.992 22:36:07 -- spdk/autotest.sh@117 -- # uname -s 00:04:59.992 22:36:07 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:59.992 22:36:07 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:59.992 22:36:07 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:01.366 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:01.366 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:01.366 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:01.366 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:01.366 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:01.366 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:01.366 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:01.366 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:01.366 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:01.366 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:01.366 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:01.366 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:01.366 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:01.366 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:01.366 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:01.366 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:02.306 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:02.306 22:36:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:03.247 22:36:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:03.247 22:36:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:03.247 22:36:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:03.247 22:36:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:03.247 22:36:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:03.247 22:36:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:03.247 22:36:10 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.247 22:36:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:03.247 22:36:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:03.512 22:36:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:03.512 22:36:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:03.512 22:36:11 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.468 Waiting for block devices as requested 00:05:04.728 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:04.728 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:04.988 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:04.988 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:04.988 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:04.988 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:05.248 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:05.248 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:05.248 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:05.507 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:05.507 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:05.507 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:05.507 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:05.766 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:05.766 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:05.766 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:05.766 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:06.026 22:36:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:06.026 22:36:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:06.026 22:36:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:06.026 22:36:13 -- 
common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:06.026 22:36:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:06.026 22:36:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:06.026 22:36:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:06.026 22:36:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:06.026 22:36:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:06.026 22:36:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:06.026 22:36:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:06.026 22:36:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:06.026 22:36:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:06.026 22:36:13 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:06.026 22:36:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:06.026 22:36:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:06.026 22:36:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:06.026 22:36:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:06.026 22:36:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:06.026 22:36:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:06.026 22:36:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:06.026 22:36:13 -- common/autotest_common.sh@1543 -- # continue 00:05:06.026 22:36:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:06.026 22:36:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.026 22:36:13 -- common/autotest_common.sh@10 -- # set +x 00:05:06.026 22:36:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:06.026 22:36:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.026 
22:36:13 -- common/autotest_common.sh@10 -- # set +x 00:05:06.026 22:36:13 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:07.405 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:07.405 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:07.405 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:07.405 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:07.405 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:07.405 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:07.405 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:07.405 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:07.405 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:07.405 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:07.405 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:07.405 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:07.405 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:07.405 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:07.405 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:07.405 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:08.343 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:08.343 22:36:16 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:08.343 22:36:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.343 22:36:16 -- common/autotest_common.sh@10 -- # set +x 00:05:08.343 22:36:16 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:08.343 22:36:16 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:08.343 22:36:16 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:08.343 22:36:16 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:08.343 22:36:16 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:08.343 22:36:16 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:08.343 22:36:16 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:08.343 22:36:16 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:05:08.343 22:36:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:08.343 22:36:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:08.343 22:36:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.343 22:36:16 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:08.343 22:36:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:08.603 22:36:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:08.603 22:36:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:08.603 22:36:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:08.603 22:36:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:08.603 22:36:16 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:08.603 22:36:16 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:08.603 22:36:16 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:08.603 22:36:16 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:08.603 22:36:16 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:05:08.603 22:36:16 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:05:08.603 22:36:16 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=4131003 00:05:08.603 22:36:16 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.603 22:36:16 -- common/autotest_common.sh@1585 -- # waitforlisten 4131003 00:05:08.603 22:36:16 -- common/autotest_common.sh@835 -- # '[' -z 4131003 ']' 00:05:08.603 22:36:16 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.603 22:36:16 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.603 22:36:16 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.603 22:36:16 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.603 22:36:16 -- common/autotest_common.sh@10 -- # set +x 00:05:08.603 [2024-12-10 22:36:16.149586] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:05:08.603 [2024-12-10 22:36:16.149677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4131003 ] 00:05:08.603 [2024-12-10 22:36:16.217565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.603 [2024-12-10 22:36:16.277973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.863 22:36:16 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.863 22:36:16 -- common/autotest_common.sh@868 -- # return 0 00:05:08.863 22:36:16 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:08.863 22:36:16 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:08.863 22:36:16 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:12.188 nvme0n1 00:05:12.188 22:36:19 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:12.188 [2024-12-10 22:36:19.909079] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:12.188 [2024-12-10 22:36:19.909122] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:12.188 request: 00:05:12.188 { 00:05:12.188 "nvme_ctrlr_name": "nvme0", 00:05:12.188 "password": "test", 00:05:12.188 "method": 
"bdev_nvme_opal_revert", 00:05:12.188 "req_id": 1 00:05:12.188 } 00:05:12.188 Got JSON-RPC error response 00:05:12.188 response: 00:05:12.188 { 00:05:12.188 "code": -32603, 00:05:12.188 "message": "Internal error" 00:05:12.188 } 00:05:12.449 22:36:19 -- common/autotest_common.sh@1591 -- # true 00:05:12.449 22:36:19 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:12.449 22:36:19 -- common/autotest_common.sh@1595 -- # killprocess 4131003 00:05:12.449 22:36:19 -- common/autotest_common.sh@954 -- # '[' -z 4131003 ']' 00:05:12.449 22:36:19 -- common/autotest_common.sh@958 -- # kill -0 4131003 00:05:12.449 22:36:19 -- common/autotest_common.sh@959 -- # uname 00:05:12.449 22:36:19 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.449 22:36:19 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4131003 00:05:12.449 22:36:19 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.449 22:36:19 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.449 22:36:19 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4131003' 00:05:12.449 killing process with pid 4131003 00:05:12.449 22:36:19 -- common/autotest_common.sh@973 -- # kill 4131003 00:05:12.449 22:36:19 -- common/autotest_common.sh@978 -- # wait 4131003 00:05:14.353 22:36:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:14.353 22:36:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:14.353 22:36:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:14.353 22:36:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:14.353 22:36:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:14.353 22:36:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.353 22:36:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.353 22:36:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:14.353 22:36:21 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:14.353 22:36:21 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.353 22:36:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.353 22:36:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.353 ************************************ 00:05:14.353 START TEST env 00:05:14.353 ************************************ 00:05:14.353 22:36:21 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:14.353 * Looking for test storage... 00:05:14.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:14.353 22:36:21 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.353 22:36:21 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.353 22:36:21 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.353 22:36:21 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.353 22:36:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.353 22:36:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.353 22:36:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.353 22:36:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.353 22:36:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.353 22:36:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.353 22:36:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.353 22:36:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.353 22:36:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.353 22:36:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.353 22:36:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.353 22:36:21 env -- scripts/common.sh@344 -- # case "$op" in 00:05:14.353 22:36:21 env -- scripts/common.sh@345 -- # : 1 00:05:14.353 22:36:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.353 22:36:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.353 22:36:21 env -- scripts/common.sh@365 -- # decimal 1 00:05:14.353 22:36:21 env -- scripts/common.sh@353 -- # local d=1 00:05:14.353 22:36:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.353 22:36:21 env -- scripts/common.sh@355 -- # echo 1 00:05:14.353 22:36:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.353 22:36:21 env -- scripts/common.sh@366 -- # decimal 2 00:05:14.353 22:36:21 env -- scripts/common.sh@353 -- # local d=2 00:05:14.353 22:36:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.353 22:36:21 env -- scripts/common.sh@355 -- # echo 2 00:05:14.353 22:36:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.353 22:36:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.353 22:36:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.353 22:36:21 env -- scripts/common.sh@368 -- # return 0 00:05:14.353 22:36:21 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.353 22:36:21 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.353 --rc genhtml_branch_coverage=1 00:05:14.353 --rc genhtml_function_coverage=1 00:05:14.353 --rc genhtml_legend=1 00:05:14.353 --rc geninfo_all_blocks=1 00:05:14.353 --rc geninfo_unexecuted_blocks=1 00:05:14.353 00:05:14.353 ' 00:05:14.353 22:36:21 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.353 --rc genhtml_branch_coverage=1 00:05:14.353 --rc genhtml_function_coverage=1 00:05:14.353 --rc genhtml_legend=1 00:05:14.353 --rc geninfo_all_blocks=1 00:05:14.353 --rc geninfo_unexecuted_blocks=1 00:05:14.353 00:05:14.353 ' 00:05:14.353 22:36:21 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:14.353 --rc genhtml_branch_coverage=1
00:05:14.353 --rc genhtml_function_coverage=1
00:05:14.353 --rc genhtml_legend=1
00:05:14.353 --rc geninfo_all_blocks=1
00:05:14.353 --rc geninfo_unexecuted_blocks=1
00:05:14.353
00:05:14.353 '
00:05:14.353 22:36:21 env -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:14.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.353 --rc genhtml_branch_coverage=1
00:05:14.353 --rc genhtml_function_coverage=1
00:05:14.353 --rc genhtml_legend=1
00:05:14.353 --rc geninfo_all_blocks=1
00:05:14.353 --rc geninfo_unexecuted_blocks=1
00:05:14.353
00:05:14.353 '
00:05:14.353 22:36:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:05:14.353 22:36:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:14.353 22:36:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:14.353 22:36:21 env -- common/autotest_common.sh@10 -- # set +x
00:05:14.353 ************************************
00:05:14.353 START TEST env_memory
00:05:14.353 ************************************
00:05:14.353 22:36:21 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:05:14.353
00:05:14.353
00:05:14.353 CUnit - A unit testing framework for C - Version 2.1-3
00:05:14.353 http://cunit.sourceforge.net/
00:05:14.353
00:05:14.353
00:05:14.353 Suite: memory
00:05:14.353 Test: alloc and free memory map ...[2024-12-10 22:36:21.964589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:05:14.353 passed
00:05:14.353 Test: mem map translation ...[2024-12-10 22:36:21.985487] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:05:14.353 [2024-12-10 22:36:21.985509] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:05:14.353 [2024-12-10 22:36:21.985590] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:05:14.353 [2024-12-10 22:36:21.985605] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:05:14.353 passed
00:05:14.353 Test: mem map registration ...[2024-12-10 22:36:22.027414] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:05:14.353 [2024-12-10 22:36:22.027433] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:05:14.353 passed
00:05:14.354 Test: mem map adjacent registrations ...passed
00:05:14.354
00:05:14.354 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:14.354               suites      1      1    n/a      0        0
00:05:14.354                tests      4      4      4      0        0
00:05:14.354              asserts    152    152    152      0      n/a
00:05:14.354
00:05:14.354 Elapsed time = 0.143 seconds
00:05:14.614
00:05:14.614 real 0m0.151s
00:05:14.614 user 0m0.143s
00:05:14.614 sys 0m0.008s
00:05:14.614 22:36:22 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.614 22:36:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:05:14.614 ************************************
00:05:14.614 END TEST env_memory
00:05:14.614 ************************************
00:05:14.614 22:36:22 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:05:14.614 22:36:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:14.614 22:36:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:14.614 22:36:22 env -- common/autotest_common.sh@10 -- # set +x
00:05:14.614 ************************************
00:05:14.614 START TEST env_vtophys
00:05:14.614 ************************************
00:05:14.614 22:36:22 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:05:14.614 EAL: lib.eal log level changed from notice to debug
00:05:14.614 EAL: Detected lcore 0 as core 0 on socket 0
00:05:14.614 EAL: Detected lcore 1 as core 1 on socket 0
00:05:14.614 EAL: Detected lcore 2 as core 2 on socket 0
00:05:14.614 EAL: Detected lcore 3 as core 3 on socket 0
00:05:14.614 EAL: Detected lcore 4 as core 4 on socket 0
00:05:14.614 EAL: Detected lcore 5 as core 5 on socket 0
00:05:14.614 EAL: Detected lcore 6 as core 8 on socket 0
00:05:14.614 EAL: Detected lcore 7 as core 9 on socket 0
00:05:14.614 EAL: Detected lcore 8 as core 10 on socket 0
00:05:14.614 EAL: Detected lcore 9 as core 11 on socket 0
00:05:14.614 EAL: Detected lcore 10 as core 12 on socket 0
00:05:14.614 EAL: Detected lcore 11 as core 13 on socket 0
00:05:14.614 EAL: Detected lcore 12 as core 0 on socket 1
00:05:14.614 EAL: Detected lcore 13 as core 1 on socket 1
00:05:14.614 EAL: Detected lcore 14 as core 2 on socket 1
00:05:14.614 EAL: Detected lcore 15 as core 3 on socket 1
00:05:14.614 EAL: Detected lcore 16 as core 4 on socket 1
00:05:14.614 EAL: Detected lcore 17 as core 5 on socket 1
00:05:14.614 EAL: Detected lcore 18 as core 8 on socket 1
00:05:14.614 EAL: Detected lcore 19 as core 9 on socket 1
00:05:14.614 EAL: Detected lcore 20 as core 10 on socket 1
00:05:14.614 EAL: Detected lcore 21 as core 11 on socket 1
00:05:14.614 EAL: Detected lcore 22 as core 12 on socket 1
00:05:14.614 EAL: Detected lcore 23 as core 13 on socket 1
00:05:14.614 EAL: Detected lcore 24 as core 0 on socket 0
00:05:14.614 EAL: Detected lcore 25 as core 1 on socket 0
00:05:14.614 EAL: Detected lcore 26 as core 2 on socket 0
00:05:14.614 EAL: Detected lcore 27 as core 3 on socket 0
00:05:14.614 EAL: Detected lcore 28 as core 4 on socket 0
00:05:14.614 EAL: Detected lcore 29 as core 5 on socket 0
00:05:14.614 EAL: Detected lcore 30 as core 8 on socket 0
00:05:14.614 EAL: Detected lcore 31 as core 9 on socket 0
00:05:14.614 EAL: Detected lcore 32 as core 10 on socket 0
00:05:14.614 EAL: Detected lcore 33 as core 11 on socket 0
00:05:14.614 EAL: Detected lcore 34 as core 12 on socket 0
00:05:14.614 EAL: Detected lcore 35 as core 13 on socket 0
00:05:14.614 EAL: Detected lcore 36 as core 0 on socket 1
00:05:14.614 EAL: Detected lcore 37 as core 1 on socket 1
00:05:14.614 EAL: Detected lcore 38 as core 2 on socket 1
00:05:14.614 EAL: Detected lcore 39 as core 3 on socket 1
00:05:14.614 EAL: Detected lcore 40 as core 4 on socket 1
00:05:14.614 EAL: Detected lcore 41 as core 5 on socket 1
00:05:14.614 EAL: Detected lcore 42 as core 8 on socket 1
00:05:14.614 EAL: Detected lcore 43 as core 9 on socket 1
00:05:14.614 EAL: Detected lcore 44 as core 10 on socket 1
00:05:14.614 EAL: Detected lcore 45 as core 11 on socket 1
00:05:14.614 EAL: Detected lcore 46 as core 12 on socket 1
00:05:14.614 EAL: Detected lcore 47 as core 13 on socket 1
00:05:14.614 EAL: Maximum logical cores by configuration: 128
00:05:14.614 EAL: Detected CPU lcores: 48
00:05:14.614 EAL: Detected NUMA nodes: 2
00:05:14.614 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:05:14.614 EAL: Detected shared linkage of DPDK
00:05:14.614 EAL: No shared files mode enabled, IPC will be disabled
00:05:14.614 EAL: Bus pci wants IOVA as 'DC'
00:05:14.614 EAL: Buses did not request a specific IOVA mode.
00:05:14.614 EAL: IOMMU is available, selecting IOVA as VA mode.
00:05:14.614 EAL: Selected IOVA mode 'VA'
00:05:14.614 EAL: Probing VFIO support...
00:05:14.614 EAL: IOMMU type 1 (Type 1) is supported
00:05:14.614 EAL: IOMMU type 7 (sPAPR) is not supported
00:05:14.614 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:05:14.614 EAL: VFIO support initialized
00:05:14.614 EAL: Ask a virtual area of 0x2e000 bytes
00:05:14.614 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:05:14.614 EAL: Setting up physically contiguous memory...
00:05:14.614 EAL: Setting maximum number of open files to 524288
00:05:14.614 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:05:14.614 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:05:14.614 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:05:14.614 EAL: Ask a virtual area of 0x61000 bytes
00:05:14.614 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:05:14.614 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:14.614 EAL: Ask a virtual area of 0x400000000 bytes
00:05:14.614 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:05:14.614 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:05:14.614 EAL: Ask a virtual area of 0x61000 bytes
00:05:14.614 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:05:14.614 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:14.614 EAL: Ask a virtual area of 0x400000000 bytes
00:05:14.614 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:05:14.614 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:05:14.614 EAL: Ask a virtual area of 0x61000 bytes
00:05:14.614 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:05:14.614 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:14.614 EAL: Ask a virtual area of 0x400000000 bytes
00:05:14.614 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:05:14.615 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:05:14.615 EAL: Ask a virtual area of 0x61000 bytes
00:05:14.615 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:05:14.615 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:14.615 EAL: Ask a virtual area of 0x400000000 bytes
00:05:14.615 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:05:14.615 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:05:14.615 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:05:14.615 EAL: Ask a virtual area of 0x61000 bytes
00:05:14.615 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:05:14.615 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:14.615 EAL: Ask a virtual area of 0x400000000 bytes
00:05:14.615 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:05:14.615 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:05:14.615 EAL: Ask a virtual area of 0x61000 bytes
00:05:14.615 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:05:14.615 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:14.615 EAL: Ask a virtual area of 0x400000000 bytes
00:05:14.615 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:05:14.615 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:05:14.615 EAL: Ask a virtual area of 0x61000 bytes
00:05:14.615 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:05:14.615 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:14.615 EAL: Ask a virtual area of 0x400000000 bytes
00:05:14.615 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:05:14.615 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:05:14.615 EAL: Ask a virtual area of 0x61000 bytes
00:05:14.615 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:05:14.615 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:14.615 EAL: Ask a virtual area of 0x400000000 bytes
00:05:14.615 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:05:14.615 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:05:14.615 EAL: Hugepages will be freed exactly as allocated.
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: TSC frequency is ~2700000 KHz
00:05:14.615 EAL: Main lcore 0 is ready (tid=7f653f808a00;cpuset=[0])
00:05:14.615 EAL: Trying to obtain current memory policy.
00:05:14.615 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:14.615 EAL: Restoring previous memory policy: 0
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was expanded by 2MB
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: No PCI address specified using 'addr=' in: bus=pci
00:05:14.615 EAL: Mem event callback 'spdk:(nil)' registered
00:05:14.615
00:05:14.615
00:05:14.615 CUnit - A unit testing framework for C - Version 2.1-3
00:05:14.615 http://cunit.sourceforge.net/
00:05:14.615
00:05:14.615
00:05:14.615 Suite: components_suite
00:05:14.615 Test: vtophys_malloc_test ...passed
00:05:14.615 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:14.615 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:14.615 EAL: Restoring previous memory policy: 4
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was expanded by 4MB
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was shrunk by 4MB
00:05:14.615 EAL: Trying to obtain current memory policy.
00:05:14.615 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:14.615 EAL: Restoring previous memory policy: 4
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was expanded by 6MB
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was shrunk by 6MB
00:05:14.615 EAL: Trying to obtain current memory policy.
00:05:14.615 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:14.615 EAL: Restoring previous memory policy: 4
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was expanded by 10MB
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was shrunk by 10MB
00:05:14.615 EAL: Trying to obtain current memory policy.
00:05:14.615 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:14.615 EAL: Restoring previous memory policy: 4
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was expanded by 18MB
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was shrunk by 18MB
00:05:14.615 EAL: Trying to obtain current memory policy.
00:05:14.615 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:14.615 EAL: Restoring previous memory policy: 4
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was expanded by 34MB
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was shrunk by 34MB
00:05:14.615 EAL: Trying to obtain current memory policy.
00:05:14.615 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:14.615 EAL: Restoring previous memory policy: 4
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was expanded by 66MB
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was shrunk by 66MB
00:05:14.615 EAL: Trying to obtain current memory policy.
00:05:14.615 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:14.615 EAL: Restoring previous memory policy: 4
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.615 EAL: request: mp_malloc_sync
00:05:14.615 EAL: No shared files mode enabled, IPC is disabled
00:05:14.615 EAL: Heap on socket 0 was expanded by 130MB
00:05:14.615 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.874 EAL: request: mp_malloc_sync
00:05:14.874 EAL: No shared files mode enabled, IPC is disabled
00:05:14.874 EAL: Heap on socket 0 was shrunk by 130MB
00:05:14.874 EAL: Trying to obtain current memory policy.
00:05:14.874 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:14.874 EAL: Restoring previous memory policy: 4
00:05:14.874 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.874 EAL: request: mp_malloc_sync
00:05:14.874 EAL: No shared files mode enabled, IPC is disabled
00:05:14.874 EAL: Heap on socket 0 was expanded by 258MB
00:05:14.874 EAL: Calling mem event callback 'spdk:(nil)'
00:05:14.874 EAL: request: mp_malloc_sync
00:05:14.874 EAL: No shared files mode enabled, IPC is disabled
00:05:14.874 EAL: Heap on socket 0 was shrunk by 258MB
00:05:14.874 EAL: Trying to obtain current memory policy.
00:05:14.874 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:15.134 EAL: Restoring previous memory policy: 4
00:05:15.134 EAL: Calling mem event callback 'spdk:(nil)'
00:05:15.134 EAL: request: mp_malloc_sync
00:05:15.134 EAL: No shared files mode enabled, IPC is disabled
00:05:15.134 EAL: Heap on socket 0 was expanded by 514MB
00:05:15.134 EAL: Calling mem event callback 'spdk:(nil)'
00:05:15.394 EAL: request: mp_malloc_sync
00:05:15.394 EAL: No shared files mode enabled, IPC is disabled
00:05:15.394 EAL: Heap on socket 0 was shrunk by 514MB
00:05:15.394 EAL: Trying to obtain current memory policy.
00:05:15.394 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:15.654 EAL: Restoring previous memory policy: 4
00:05:15.654 EAL: Calling mem event callback 'spdk:(nil)'
00:05:15.654 EAL: request: mp_malloc_sync
00:05:15.654 EAL: No shared files mode enabled, IPC is disabled
00:05:15.654 EAL: Heap on socket 0 was expanded by 1026MB
00:05:15.654 EAL: Calling mem event callback 'spdk:(nil)'
00:05:15.913 EAL: request: mp_malloc_sync
00:05:15.913 EAL: No shared files mode enabled, IPC is disabled
00:05:15.913 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:15.913 passed
00:05:15.913
00:05:15.913 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:15.913               suites      1      1    n/a      0        0
00:05:15.913                tests      2      2      2      0        0
00:05:15.913              asserts    497    497    497      0      n/a
00:05:15.913
00:05:15.913 Elapsed time = 1.336 seconds
00:05:15.913 EAL: Calling mem event callback 'spdk:(nil)'
00:05:15.913 EAL: request: mp_malloc_sync
00:05:15.913 EAL: No shared files mode enabled, IPC is disabled
00:05:15.913 EAL: Heap on socket 0 was shrunk by 2MB
00:05:15.913 EAL: No shared files mode enabled, IPC is disabled
00:05:15.913 EAL: No shared files mode enabled, IPC is disabled
00:05:15.913 EAL: No shared files mode enabled, IPC is disabled
00:05:15.913
00:05:15.913 real 0m1.460s
00:05:15.913 user 0m0.877s
00:05:15.913 sys 0m0.545s
00:05:15.913 22:36:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:15.913 22:36:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:15.913 ************************************
00:05:15.913 END TEST env_vtophys
00:05:15.913 ************************************
00:05:15.913 22:36:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:15.913 22:36:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:15.913 22:36:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:15.913 22:36:23 env -- common/autotest_common.sh@10 -- # set +x
00:05:15.913 ************************************
00:05:15.913 START TEST env_pci
00:05:15.913 ************************************
00:05:15.913 22:36:23 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:16.173
00:05:16.173
00:05:16.173 CUnit - A unit testing framework for C - Version 2.1-3
00:05:16.173 http://cunit.sourceforge.net/
00:05:16.173
00:05:16.173
00:05:16.173 Suite: pci
00:05:16.173 Test: pci_hook ...[2024-12-10 22:36:23.649822] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4131901 has claimed it
00:05:16.173 EAL: Cannot find device (10000:00:01.0)
00:05:16.173 EAL: Failed to attach device on primary process
00:05:16.173 passed
00:05:16.173
00:05:16.173 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:16.173               suites      1      1    n/a      0        0
00:05:16.173                tests      1      1      1      0        0
00:05:16.173              asserts     25     25     25      0      n/a
00:05:16.173
00:05:16.173 Elapsed time = 0.021 seconds
00:05:16.173
00:05:16.173 real 0m0.035s
00:05:16.173 user 0m0.012s
00:05:16.173 sys 0m0.022s
00:05:16.173 22:36:23 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:16.173 22:36:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:16.173 ************************************
00:05:16.173 END TEST env_pci
00:05:16.173 ************************************
00:05:16.173 22:36:23 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:16.173 22:36:23 env -- env/env.sh@15 -- # uname
00:05:16.173 22:36:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:16.173 22:36:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:16.174 22:36:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:16.174 22:36:23 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:16.174 22:36:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:16.174 22:36:23 env -- common/autotest_common.sh@10 -- # set +x
00:05:16.174 ************************************
00:05:16.174 START TEST env_dpdk_post_init
00:05:16.174 ************************************
00:05:16.174 22:36:23 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:16.174 EAL: Detected CPU lcores: 48
00:05:16.174 EAL: Detected NUMA nodes: 2
00:05:16.174 EAL: Detected shared linkage of DPDK
00:05:16.174 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:16.174 EAL: Selected IOVA mode 'VA'
00:05:16.174 EAL: VFIO support initialized
00:05:16.174 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:16.174 EAL: Using IOMMU type 1 (Type 1)
00:05:16.174 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:05:16.174 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:05:16.174 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:05:16.174 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:05:16.174 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:05:16.174 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:05:16.434 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:05:16.434 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:05:16.434 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:05:16.434 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:05:16.434 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:05:16.434 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:05:16.434 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:05:16.434 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:05:16.434 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:05:16.434 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:05:17.375 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1)
00:05:20.669 EAL: Releasing PCI mapped resource for 0000:88:00.0
00:05:20.669 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000
00:05:20.669 Starting DPDK initialization...
00:05:20.669 Starting SPDK post initialization...
00:05:20.669 SPDK NVMe probe
00:05:20.669 Attaching to 0000:88:00.0
00:05:20.669 Attached to 0000:88:00.0
00:05:20.669 Cleaning up...
00:05:20.669
00:05:20.669 real 0m4.390s
00:05:20.669 user 0m3.020s
00:05:20.669 sys 0m0.430s
00:05:20.669 22:36:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:20.669 22:36:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:20.669 ************************************
00:05:20.669 END TEST env_dpdk_post_init
00:05:20.669 ************************************
00:05:20.669 22:36:28 env -- env/env.sh@26 -- # uname
00:05:20.669 22:36:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:20.669 22:36:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:20.669 22:36:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:20.669 22:36:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:20.669 22:36:28 env -- common/autotest_common.sh@10 -- # set +x
00:05:20.669 ************************************
00:05:20.669 START TEST env_mem_callbacks
00:05:20.669 ************************************
00:05:20.669 22:36:28 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:20.669 EAL: Detected CPU lcores: 48
00:05:20.670 EAL: Detected NUMA nodes: 2
00:05:20.670 EAL: Detected shared linkage of DPDK
00:05:20.670 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:20.670 EAL: Selected IOVA mode 'VA'
00:05:20.670 EAL: VFIO support initialized
00:05:20.670 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:20.670
00:05:20.670
00:05:20.670 CUnit - A unit testing framework for C - Version 2.1-3
00:05:20.670 http://cunit.sourceforge.net/
00:05:20.670
00:05:20.670
00:05:20.670 Suite: memory
00:05:20.670 Test: test ...
00:05:20.670 register 0x200000200000 2097152
00:05:20.670 malloc 3145728
00:05:20.670 register 0x200000400000 4194304
00:05:20.670 buf 0x200000500000 len 3145728 PASSED
00:05:20.670 malloc 64
00:05:20.670 buf 0x2000004fff40 len 64 PASSED
00:05:20.670 malloc 4194304
00:05:20.670 register 0x200000800000 6291456
00:05:20.670 buf 0x200000a00000 len 4194304 PASSED
00:05:20.670 free 0x200000500000 3145728
00:05:20.670 free 0x2000004fff40 64
00:05:20.670 unregister 0x200000400000 4194304 PASSED
00:05:20.670 free 0x200000a00000 4194304
00:05:20.670 unregister 0x200000800000 6291456 PASSED
00:05:20.670 malloc 8388608
00:05:20.670 register 0x200000400000 10485760
00:05:20.670 buf 0x200000600000 len 8388608 PASSED
00:05:20.670 free 0x200000600000 8388608
00:05:20.670 unregister 0x200000400000 10485760 PASSED
00:05:20.670 passed
00:05:20.670
00:05:20.670 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:20.670               suites      1      1    n/a      0        0
00:05:20.670                tests      1      1      1      0        0
00:05:20.670              asserts     15     15     15      0      n/a
00:05:20.670
00:05:20.670 Elapsed time = 0.005 seconds
00:05:20.670
00:05:20.670 real 0m0.046s
00:05:20.670 user 0m0.015s
00:05:20.670 sys 0m0.031s
00:05:20.670 22:36:28 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:20.670 22:36:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:20.670 ************************************
00:05:20.670 END TEST env_mem_callbacks
00:05:20.670 ************************************
00:05:20.670
00:05:20.670 real 0m6.476s
00:05:20.670 user 0m4.258s
00:05:20.670 sys 0m1.261s
00:05:20.670 22:36:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:20.670 22:36:28 env -- common/autotest_common.sh@10 -- # set +x
00:05:20.670 ************************************
00:05:20.670 END TEST env
00:05:20.670 ************************************
00:05:20.670 22:36:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:20.670 22:36:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:20.670 22:36:28 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:20.670 22:36:28 -- common/autotest_common.sh@10 -- # set +x
00:05:20.670 ************************************
00:05:20.670 START TEST rpc
00:05:20.670 ************************************
00:05:20.670 22:36:28 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:20.670 * Looking for test storage...
00:05:20.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:20.670 22:36:28 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:20.670 22:36:28 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:20.670 22:36:28 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:20.930 22:36:28 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:20.930 22:36:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:20.930 22:36:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:20.930 22:36:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:20.930 22:36:28 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:20.930 22:36:28 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:20.930 22:36:28 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:20.930 22:36:28 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:20.930 22:36:28 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:20.930 22:36:28 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:20.930 22:36:28 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:20.930 22:36:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:20.930 22:36:28 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:20.930 22:36:28 rpc -- scripts/common.sh@345 -- # : 1
00:05:20.930 22:36:28 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:20.930 22:36:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:20.930 22:36:28 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:20.930 22:36:28 rpc -- scripts/common.sh@353 -- # local d=1
00:05:20.930 22:36:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:20.930 22:36:28 rpc -- scripts/common.sh@355 -- # echo 1
00:05:20.930 22:36:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:20.930 22:36:28 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:20.930 22:36:28 rpc -- scripts/common.sh@353 -- # local d=2
00:05:20.930 22:36:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:20.930 22:36:28 rpc -- scripts/common.sh@355 -- # echo 2
00:05:20.930 22:36:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:20.930 22:36:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:20.930 22:36:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:20.930 22:36:28 rpc -- scripts/common.sh@368 -- # return 0
00:05:20.930 22:36:28 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:20.930 22:36:28 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:20.930 --rc genhtml_branch_coverage=1
00:05:20.930 --rc genhtml_function_coverage=1
00:05:20.930 --rc genhtml_legend=1
00:05:20.930 --rc geninfo_all_blocks=1
00:05:20.930 --rc geninfo_unexecuted_blocks=1
00:05:20.930
00:05:20.930 '
00:05:20.930 22:36:28 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:20.930 --rc genhtml_branch_coverage=1
00:05:20.930 --rc genhtml_function_coverage=1
00:05:20.930 --rc genhtml_legend=1
00:05:20.930 --rc geninfo_all_blocks=1
00:05:20.930 --rc geninfo_unexecuted_blocks=1
00:05:20.930
00:05:20.930 '
00:05:20.930 22:36:28 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:20.930 --rc genhtml_branch_coverage=1
00:05:20.930 --rc genhtml_function_coverage=1
00:05:20.930 --rc genhtml_legend=1
00:05:20.930 --rc geninfo_all_blocks=1
00:05:20.930 --rc geninfo_unexecuted_blocks=1
00:05:20.930
00:05:20.930 '
00:05:20.930 22:36:28 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:20.930 --rc genhtml_branch_coverage=1
00:05:20.930 --rc genhtml_function_coverage=1
00:05:20.930 --rc genhtml_legend=1
00:05:20.930 --rc geninfo_all_blocks=1
00:05:20.931 --rc geninfo_unexecuted_blocks=1
00:05:20.931
00:05:20.931 '
00:05:20.931 22:36:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4132692
00:05:20.931 22:36:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:20.931 22:36:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:20.931 22:36:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4132692
00:05:20.931 22:36:28 rpc -- common/autotest_common.sh@835 -- # '[' -z 4132692 ']'
00:05:20.931 22:36:28 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.931 22:36:28 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:20.931 22:36:28 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:20.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:20.931 22:36:28 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:20.931 22:36:28 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.931 [2024-12-10 22:36:28.461470] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
00:05:20.931 [2024-12-10 22:36:28.461588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4132692 ]
00:05:20.931 [2024-12-10 22:36:28.526367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.931 [2024-12-10 22:36:28.582194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:20.931 [2024-12-10 22:36:28.582250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4132692' to capture a snapshot of events at runtime.
00:05:20.931 [2024-12-10 22:36:28.582277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:20.931 [2024-12-10 22:36:28.582288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:20.931 [2024-12-10 22:36:28.582298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4132692 for offline analysis/debug.
00:05:20.931 [2024-12-10 22:36:28.582863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.192 22:36:28 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:21.192 22:36:28 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:21.192 22:36:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:21.192 22:36:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:21.192 22:36:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:21.192 22:36:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:21.192 22:36:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:21.192 22:36:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.192 22:36:28 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.192 ************************************
00:05:21.192 START TEST rpc_integrity
00:05:21.192 ************************************
00:05:21.192 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:21.192 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:21.192 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.192 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:21.192 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.192 22:36:28 rpc.rpc_integrity --
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:21.192 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:21.192 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:21.192 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:21.192 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.192 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.192 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.192 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:21.192 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:21.192 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.192 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.453 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.453 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:21.453 { 00:05:21.453 "name": "Malloc0", 00:05:21.453 "aliases": [ 00:05:21.453 "91feec48-3ed3-47d6-b553-c447ced76589" 00:05:21.453 ], 00:05:21.453 "product_name": "Malloc disk", 00:05:21.453 "block_size": 512, 00:05:21.453 "num_blocks": 16384, 00:05:21.453 "uuid": "91feec48-3ed3-47d6-b553-c447ced76589", 00:05:21.453 "assigned_rate_limits": { 00:05:21.453 "rw_ios_per_sec": 0, 00:05:21.453 "rw_mbytes_per_sec": 0, 00:05:21.453 "r_mbytes_per_sec": 0, 00:05:21.453 "w_mbytes_per_sec": 0 00:05:21.453 }, 00:05:21.453 "claimed": false, 00:05:21.453 "zoned": false, 00:05:21.453 "supported_io_types": { 00:05:21.453 "read": true, 00:05:21.453 "write": true, 00:05:21.453 "unmap": true, 00:05:21.453 "flush": true, 00:05:21.453 "reset": true, 00:05:21.453 "nvme_admin": false, 00:05:21.453 "nvme_io": false, 00:05:21.453 "nvme_io_md": false, 00:05:21.453 "write_zeroes": true, 00:05:21.453 "zcopy": true, 00:05:21.453 "get_zone_info": false, 00:05:21.453 
"zone_management": false, 00:05:21.453 "zone_append": false, 00:05:21.453 "compare": false, 00:05:21.453 "compare_and_write": false, 00:05:21.453 "abort": true, 00:05:21.453 "seek_hole": false, 00:05:21.453 "seek_data": false, 00:05:21.453 "copy": true, 00:05:21.453 "nvme_iov_md": false 00:05:21.453 }, 00:05:21.453 "memory_domains": [ 00:05:21.453 { 00:05:21.453 "dma_device_id": "system", 00:05:21.453 "dma_device_type": 1 00:05:21.453 }, 00:05:21.453 { 00:05:21.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.453 "dma_device_type": 2 00:05:21.453 } 00:05:21.453 ], 00:05:21.453 "driver_specific": {} 00:05:21.453 } 00:05:21.453 ]' 00:05:21.453 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:21.453 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:21.453 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:21.453 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.453 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.453 [2024-12-10 22:36:28.964341] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:21.453 [2024-12-10 22:36:28.964389] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:21.453 [2024-12-10 22:36:28.964411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1566620 00:05:21.453 [2024-12-10 22:36:28.964424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:21.453 [2024-12-10 22:36:28.965793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:21.453 [2024-12-10 22:36:28.965819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:21.453 Passthru0 00:05:21.453 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.453 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:21.453 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.453 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.453 22:36:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.453 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:21.453 { 00:05:21.453 "name": "Malloc0", 00:05:21.453 "aliases": [ 00:05:21.453 "91feec48-3ed3-47d6-b553-c447ced76589" 00:05:21.453 ], 00:05:21.453 "product_name": "Malloc disk", 00:05:21.453 "block_size": 512, 00:05:21.453 "num_blocks": 16384, 00:05:21.453 "uuid": "91feec48-3ed3-47d6-b553-c447ced76589", 00:05:21.453 "assigned_rate_limits": { 00:05:21.453 "rw_ios_per_sec": 0, 00:05:21.453 "rw_mbytes_per_sec": 0, 00:05:21.453 "r_mbytes_per_sec": 0, 00:05:21.453 "w_mbytes_per_sec": 0 00:05:21.453 }, 00:05:21.453 "claimed": true, 00:05:21.453 "claim_type": "exclusive_write", 00:05:21.453 "zoned": false, 00:05:21.453 "supported_io_types": { 00:05:21.453 "read": true, 00:05:21.453 "write": true, 00:05:21.453 "unmap": true, 00:05:21.453 "flush": true, 00:05:21.453 "reset": true, 00:05:21.453 "nvme_admin": false, 00:05:21.453 "nvme_io": false, 00:05:21.453 "nvme_io_md": false, 00:05:21.453 "write_zeroes": true, 00:05:21.453 "zcopy": true, 00:05:21.453 "get_zone_info": false, 00:05:21.453 "zone_management": false, 00:05:21.453 "zone_append": false, 00:05:21.453 "compare": false, 00:05:21.453 "compare_and_write": false, 00:05:21.453 "abort": true, 00:05:21.453 "seek_hole": false, 00:05:21.453 "seek_data": false, 00:05:21.453 "copy": true, 00:05:21.453 "nvme_iov_md": false 00:05:21.453 }, 00:05:21.453 "memory_domains": [ 00:05:21.453 { 00:05:21.453 "dma_device_id": "system", 00:05:21.453 "dma_device_type": 1 00:05:21.453 }, 00:05:21.453 { 00:05:21.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.453 "dma_device_type": 2 00:05:21.453 } 00:05:21.453 ], 00:05:21.453 "driver_specific": {} 00:05:21.453 }, 00:05:21.453 { 
00:05:21.453 "name": "Passthru0", 00:05:21.453 "aliases": [ 00:05:21.453 "5c42baaf-2ef9-59f0-a6f4-0291ba76a424" 00:05:21.453 ], 00:05:21.453 "product_name": "passthru", 00:05:21.453 "block_size": 512, 00:05:21.453 "num_blocks": 16384, 00:05:21.453 "uuid": "5c42baaf-2ef9-59f0-a6f4-0291ba76a424", 00:05:21.453 "assigned_rate_limits": { 00:05:21.453 "rw_ios_per_sec": 0, 00:05:21.453 "rw_mbytes_per_sec": 0, 00:05:21.453 "r_mbytes_per_sec": 0, 00:05:21.453 "w_mbytes_per_sec": 0 00:05:21.453 }, 00:05:21.453 "claimed": false, 00:05:21.453 "zoned": false, 00:05:21.453 "supported_io_types": { 00:05:21.453 "read": true, 00:05:21.453 "write": true, 00:05:21.453 "unmap": true, 00:05:21.453 "flush": true, 00:05:21.453 "reset": true, 00:05:21.453 "nvme_admin": false, 00:05:21.453 "nvme_io": false, 00:05:21.453 "nvme_io_md": false, 00:05:21.453 "write_zeroes": true, 00:05:21.453 "zcopy": true, 00:05:21.453 "get_zone_info": false, 00:05:21.453 "zone_management": false, 00:05:21.453 "zone_append": false, 00:05:21.453 "compare": false, 00:05:21.453 "compare_and_write": false, 00:05:21.453 "abort": true, 00:05:21.453 "seek_hole": false, 00:05:21.453 "seek_data": false, 00:05:21.453 "copy": true, 00:05:21.453 "nvme_iov_md": false 00:05:21.453 }, 00:05:21.453 "memory_domains": [ 00:05:21.453 { 00:05:21.453 "dma_device_id": "system", 00:05:21.453 "dma_device_type": 1 00:05:21.453 }, 00:05:21.453 { 00:05:21.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.453 "dma_device_type": 2 00:05:21.453 } 00:05:21.453 ], 00:05:21.453 "driver_specific": { 00:05:21.453 "passthru": { 00:05:21.453 "name": "Passthru0", 00:05:21.453 "base_bdev_name": "Malloc0" 00:05:21.453 } 00:05:21.453 } 00:05:21.453 } 00:05:21.453 ]' 00:05:21.453 22:36:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:21.454 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:21.454 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:21.454 22:36:29 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.454 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.454 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.454 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:21.454 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.454 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.454 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.454 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.454 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.454 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.454 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.454 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.454 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:21.454 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.454 00:05:21.454 real 0m0.216s 00:05:21.454 user 0m0.139s 00:05:21.454 sys 0m0.024s 00:05:21.454 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.454 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.454 ************************************ 00:05:21.454 END TEST rpc_integrity 00:05:21.454 ************************************ 00:05:21.454 22:36:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:21.454 22:36:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.454 22:36:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.454 22:36:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.454 ************************************ 00:05:21.454 START TEST rpc_plugins 
00:05:21.454 ************************************ 00:05:21.454 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:21.454 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:21.454 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.454 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.454 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.454 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:21.454 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:21.454 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.454 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.454 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.454 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:21.454 { 00:05:21.454 "name": "Malloc1", 00:05:21.454 "aliases": [ 00:05:21.454 "16d565f7-b80c-4127-ae60-ace0f61923ea" 00:05:21.454 ], 00:05:21.454 "product_name": "Malloc disk", 00:05:21.454 "block_size": 4096, 00:05:21.454 "num_blocks": 256, 00:05:21.454 "uuid": "16d565f7-b80c-4127-ae60-ace0f61923ea", 00:05:21.454 "assigned_rate_limits": { 00:05:21.454 "rw_ios_per_sec": 0, 00:05:21.454 "rw_mbytes_per_sec": 0, 00:05:21.454 "r_mbytes_per_sec": 0, 00:05:21.454 "w_mbytes_per_sec": 0 00:05:21.454 }, 00:05:21.454 "claimed": false, 00:05:21.454 "zoned": false, 00:05:21.454 "supported_io_types": { 00:05:21.454 "read": true, 00:05:21.454 "write": true, 00:05:21.454 "unmap": true, 00:05:21.454 "flush": true, 00:05:21.454 "reset": true, 00:05:21.454 "nvme_admin": false, 00:05:21.454 "nvme_io": false, 00:05:21.454 "nvme_io_md": false, 00:05:21.454 "write_zeroes": true, 00:05:21.454 "zcopy": true, 00:05:21.454 "get_zone_info": false, 00:05:21.454 "zone_management": false, 00:05:21.454 
"zone_append": false, 00:05:21.454 "compare": false, 00:05:21.454 "compare_and_write": false, 00:05:21.454 "abort": true, 00:05:21.454 "seek_hole": false, 00:05:21.454 "seek_data": false, 00:05:21.454 "copy": true, 00:05:21.454 "nvme_iov_md": false 00:05:21.454 }, 00:05:21.454 "memory_domains": [ 00:05:21.454 { 00:05:21.454 "dma_device_id": "system", 00:05:21.454 "dma_device_type": 1 00:05:21.454 }, 00:05:21.454 { 00:05:21.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.454 "dma_device_type": 2 00:05:21.454 } 00:05:21.454 ], 00:05:21.454 "driver_specific": {} 00:05:21.454 } 00:05:21.454 ]' 00:05:21.454 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:21.733 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:21.733 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:21.733 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.733 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.733 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.733 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:21.733 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.733 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.733 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.733 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:21.733 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:21.733 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:21.733 00:05:21.733 real 0m0.108s 00:05:21.733 user 0m0.068s 00:05:21.733 sys 0m0.010s 00:05:21.733 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.733 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.733 ************************************ 
00:05:21.733 END TEST rpc_plugins 00:05:21.733 ************************************ 00:05:21.733 22:36:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:21.733 22:36:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.733 22:36:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.733 22:36:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.733 ************************************ 00:05:21.733 START TEST rpc_trace_cmd_test 00:05:21.733 ************************************ 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:21.733 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4132692", 00:05:21.733 "tpoint_group_mask": "0x8", 00:05:21.733 "iscsi_conn": { 00:05:21.733 "mask": "0x2", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "scsi": { 00:05:21.733 "mask": "0x4", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "bdev": { 00:05:21.733 "mask": "0x8", 00:05:21.733 "tpoint_mask": "0xffffffffffffffff" 00:05:21.733 }, 00:05:21.733 "nvmf_rdma": { 00:05:21.733 "mask": "0x10", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "nvmf_tcp": { 00:05:21.733 "mask": "0x20", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "ftl": { 00:05:21.733 "mask": "0x40", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "blobfs": { 00:05:21.733 "mask": "0x80", 00:05:21.733 
"tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "dsa": { 00:05:21.733 "mask": "0x200", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "thread": { 00:05:21.733 "mask": "0x400", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "nvme_pcie": { 00:05:21.733 "mask": "0x800", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "iaa": { 00:05:21.733 "mask": "0x1000", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "nvme_tcp": { 00:05:21.733 "mask": "0x2000", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "bdev_nvme": { 00:05:21.733 "mask": "0x4000", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "sock": { 00:05:21.733 "mask": "0x8000", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "blob": { 00:05:21.733 "mask": "0x10000", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "bdev_raid": { 00:05:21.733 "mask": "0x20000", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 }, 00:05:21.733 "scheduler": { 00:05:21.733 "mask": "0x40000", 00:05:21.733 "tpoint_mask": "0x0" 00:05:21.733 } 00:05:21.733 }' 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:21.733 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:21.994 22:36:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:21.994 00:05:21.994 real 0m0.182s 00:05:21.994 user 0m0.165s 00:05:21.994 sys 0m0.010s 00:05:21.994 22:36:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.994 22:36:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:21.994 ************************************ 00:05:21.994 END TEST rpc_trace_cmd_test 00:05:21.994 ************************************ 00:05:21.994 22:36:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:21.994 22:36:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:21.994 22:36:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:21.994 22:36:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.994 22:36:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.994 22:36:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.994 ************************************ 00:05:21.994 START TEST rpc_daemon_integrity 00:05:21.994 ************************************ 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.994 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:21.995 { 00:05:21.995 "name": "Malloc2", 00:05:21.995 "aliases": [ 00:05:21.995 "8ac934c0-be0c-4e0d-b129-ced39b38525c" 00:05:21.995 ], 00:05:21.995 "product_name": "Malloc disk", 00:05:21.995 "block_size": 512, 00:05:21.995 "num_blocks": 16384, 00:05:21.995 "uuid": "8ac934c0-be0c-4e0d-b129-ced39b38525c", 00:05:21.995 "assigned_rate_limits": { 00:05:21.995 "rw_ios_per_sec": 0, 00:05:21.995 "rw_mbytes_per_sec": 0, 00:05:21.995 "r_mbytes_per_sec": 0, 00:05:21.995 "w_mbytes_per_sec": 0 00:05:21.995 }, 00:05:21.995 "claimed": false, 00:05:21.995 "zoned": false, 00:05:21.995 "supported_io_types": { 00:05:21.995 "read": true, 00:05:21.995 "write": true, 00:05:21.995 "unmap": true, 00:05:21.995 "flush": true, 00:05:21.995 "reset": true, 00:05:21.995 "nvme_admin": false, 00:05:21.995 "nvme_io": false, 00:05:21.995 "nvme_io_md": false, 00:05:21.995 "write_zeroes": true, 00:05:21.995 "zcopy": true, 00:05:21.995 "get_zone_info": false, 00:05:21.995 "zone_management": false, 00:05:21.995 "zone_append": false, 00:05:21.995 "compare": false, 00:05:21.995 "compare_and_write": false, 00:05:21.995 "abort": true, 00:05:21.995 "seek_hole": false, 00:05:21.995 "seek_data": false, 00:05:21.995 "copy": true, 00:05:21.995 "nvme_iov_md": false 00:05:21.995 }, 00:05:21.995 "memory_domains": [ 00:05:21.995 { 
00:05:21.995 "dma_device_id": "system", 00:05:21.995 "dma_device_type": 1 00:05:21.995 }, 00:05:21.995 { 00:05:21.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.995 "dma_device_type": 2 00:05:21.995 } 00:05:21.995 ], 00:05:21.995 "driver_specific": {} 00:05:21.995 } 00:05:21.995 ]' 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.995 [2024-12-10 22:36:29.606470] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:21.995 [2024-12-10 22:36:29.606519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:21.995 [2024-12-10 22:36:29.606539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16aa060 00:05:21.995 [2024-12-10 22:36:29.606572] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:21.995 [2024-12-10 22:36:29.607874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:21.995 [2024-12-10 22:36:29.607898] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:21.995 Passthru0 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:21.995 { 00:05:21.995 "name": "Malloc2", 00:05:21.995 "aliases": [ 00:05:21.995 "8ac934c0-be0c-4e0d-b129-ced39b38525c" 00:05:21.995 ], 00:05:21.995 "product_name": "Malloc disk", 00:05:21.995 "block_size": 512, 00:05:21.995 "num_blocks": 16384, 00:05:21.995 "uuid": "8ac934c0-be0c-4e0d-b129-ced39b38525c", 00:05:21.995 "assigned_rate_limits": { 00:05:21.995 "rw_ios_per_sec": 0, 00:05:21.995 "rw_mbytes_per_sec": 0, 00:05:21.995 "r_mbytes_per_sec": 0, 00:05:21.995 "w_mbytes_per_sec": 0 00:05:21.995 }, 00:05:21.995 "claimed": true, 00:05:21.995 "claim_type": "exclusive_write", 00:05:21.995 "zoned": false, 00:05:21.995 "supported_io_types": { 00:05:21.995 "read": true, 00:05:21.995 "write": true, 00:05:21.995 "unmap": true, 00:05:21.995 "flush": true, 00:05:21.995 "reset": true, 00:05:21.995 "nvme_admin": false, 00:05:21.995 "nvme_io": false, 00:05:21.995 "nvme_io_md": false, 00:05:21.995 "write_zeroes": true, 00:05:21.995 "zcopy": true, 00:05:21.995 "get_zone_info": false, 00:05:21.995 "zone_management": false, 00:05:21.995 "zone_append": false, 00:05:21.995 "compare": false, 00:05:21.995 "compare_and_write": false, 00:05:21.995 "abort": true, 00:05:21.995 "seek_hole": false, 00:05:21.995 "seek_data": false, 00:05:21.995 "copy": true, 00:05:21.995 "nvme_iov_md": false 00:05:21.995 }, 00:05:21.995 "memory_domains": [ 00:05:21.995 { 00:05:21.995 "dma_device_id": "system", 00:05:21.995 "dma_device_type": 1 00:05:21.995 }, 00:05:21.995 { 00:05:21.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.995 "dma_device_type": 2 00:05:21.995 } 00:05:21.995 ], 00:05:21.995 "driver_specific": {} 00:05:21.995 }, 00:05:21.995 { 00:05:21.995 "name": "Passthru0", 00:05:21.995 "aliases": [ 00:05:21.995 "46887768-96c0-5a8b-8ee5-9ff077c14c52" 00:05:21.995 ], 00:05:21.995 "product_name": "passthru", 00:05:21.995 "block_size": 512, 00:05:21.995 "num_blocks": 16384, 00:05:21.995 "uuid": 
"46887768-96c0-5a8b-8ee5-9ff077c14c52", 00:05:21.995 "assigned_rate_limits": { 00:05:21.995 "rw_ios_per_sec": 0, 00:05:21.995 "rw_mbytes_per_sec": 0, 00:05:21.995 "r_mbytes_per_sec": 0, 00:05:21.995 "w_mbytes_per_sec": 0 00:05:21.995 }, 00:05:21.995 "claimed": false, 00:05:21.995 "zoned": false, 00:05:21.995 "supported_io_types": { 00:05:21.995 "read": true, 00:05:21.995 "write": true, 00:05:21.995 "unmap": true, 00:05:21.995 "flush": true, 00:05:21.995 "reset": true, 00:05:21.995 "nvme_admin": false, 00:05:21.995 "nvme_io": false, 00:05:21.995 "nvme_io_md": false, 00:05:21.995 "write_zeroes": true, 00:05:21.995 "zcopy": true, 00:05:21.995 "get_zone_info": false, 00:05:21.995 "zone_management": false, 00:05:21.995 "zone_append": false, 00:05:21.995 "compare": false, 00:05:21.995 "compare_and_write": false, 00:05:21.995 "abort": true, 00:05:21.995 "seek_hole": false, 00:05:21.995 "seek_data": false, 00:05:21.995 "copy": true, 00:05:21.995 "nvme_iov_md": false 00:05:21.995 }, 00:05:21.995 "memory_domains": [ 00:05:21.995 { 00:05:21.995 "dma_device_id": "system", 00:05:21.995 "dma_device_type": 1 00:05:21.995 }, 00:05:21.995 { 00:05:21.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.995 "dma_device_type": 2 00:05:21.995 } 00:05:21.995 ], 00:05:21.995 "driver_specific": { 00:05:21.995 "passthru": { 00:05:21.995 "name": "Passthru0", 00:05:21.995 "base_bdev_name": "Malloc2" 00:05:21.995 } 00:05:21.995 } 00:05:21.995 } 00:05:21.995 ]' 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.995 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.996 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.996 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.996 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.996 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.996 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.256 22:36:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.256 00:05:22.256 real 0m0.217s 00:05:22.256 user 0m0.142s 00:05:22.256 sys 0m0.020s 00:05:22.256 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.256 22:36:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.256 ************************************ 00:05:22.256 END TEST rpc_daemon_integrity 00:05:22.256 ************************************ 00:05:22.256 22:36:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:22.256 22:36:29 rpc -- rpc/rpc.sh@84 -- # killprocess 4132692 00:05:22.256 22:36:29 rpc -- common/autotest_common.sh@954 -- # '[' -z 4132692 ']' 00:05:22.256 22:36:29 rpc -- common/autotest_common.sh@958 -- # kill -0 4132692 00:05:22.256 22:36:29 rpc -- common/autotest_common.sh@959 -- # uname 00:05:22.256 22:36:29 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.256 22:36:29 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4132692 00:05:22.256 22:36:29 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.256 22:36:29 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.256 22:36:29 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4132692' 00:05:22.256 killing process with pid 4132692 00:05:22.256 22:36:29 rpc -- common/autotest_common.sh@973 -- # kill 4132692 00:05:22.256 22:36:29 rpc -- common/autotest_common.sh@978 -- # wait 4132692 00:05:22.515 00:05:22.515 real 0m1.936s 00:05:22.515 user 0m2.428s 00:05:22.515 sys 0m0.559s 00:05:22.515 22:36:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.515 22:36:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.515 ************************************ 00:05:22.515 END TEST rpc 00:05:22.515 ************************************ 00:05:22.515 22:36:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:22.515 22:36:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.515 22:36:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.515 22:36:30 -- common/autotest_common.sh@10 -- # set +x 00:05:22.774 ************************************ 00:05:22.774 START TEST skip_rpc 00:05:22.774 ************************************ 00:05:22.774 22:36:30 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:22.774 * Looking for test storage... 
00:05:22.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:22.774 22:36:30 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.774 22:36:30 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.774 22:36:30 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.774 22:36:30 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.774 22:36:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:22.774 22:36:30 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.774 22:36:30 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.774 --rc genhtml_branch_coverage=1 00:05:22.775 --rc genhtml_function_coverage=1 00:05:22.775 --rc genhtml_legend=1 00:05:22.775 --rc geninfo_all_blocks=1 00:05:22.775 --rc geninfo_unexecuted_blocks=1 00:05:22.775 00:05:22.775 ' 00:05:22.775 22:36:30 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.775 --rc genhtml_branch_coverage=1 00:05:22.775 --rc genhtml_function_coverage=1 00:05:22.775 --rc genhtml_legend=1 00:05:22.775 --rc geninfo_all_blocks=1 00:05:22.775 --rc geninfo_unexecuted_blocks=1 00:05:22.775 00:05:22.775 ' 00:05:22.775 22:36:30 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:22.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.775 --rc genhtml_branch_coverage=1 00:05:22.775 --rc genhtml_function_coverage=1 00:05:22.775 --rc genhtml_legend=1 00:05:22.775 --rc geninfo_all_blocks=1 00:05:22.775 --rc geninfo_unexecuted_blocks=1 00:05:22.775 00:05:22.775 ' 00:05:22.775 22:36:30 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.775 --rc genhtml_branch_coverage=1 00:05:22.775 --rc genhtml_function_coverage=1 00:05:22.775 --rc genhtml_legend=1 00:05:22.775 --rc geninfo_all_blocks=1 00:05:22.775 --rc geninfo_unexecuted_blocks=1 00:05:22.775 00:05:22.775 ' 00:05:22.775 22:36:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:22.775 22:36:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:22.775 22:36:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:22.775 22:36:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.775 22:36:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.775 22:36:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.775 ************************************ 00:05:22.775 START TEST skip_rpc 00:05:22.775 ************************************ 00:05:22.775 22:36:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:22.775 22:36:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4133013 00:05:22.775 22:36:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:22.775 22:36:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.775 22:36:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:22.775 [2024-12-10 22:36:30.499569] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:05:22.775 [2024-12-10 22:36:30.499644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4133013 ] 00:05:23.035 [2024-12-10 22:36:30.573067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.035 [2024-12-10 22:36:30.630091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:28.326 22:36:35 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4133013 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 4133013 ']' 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 4133013 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4133013 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4133013' 00:05:28.326 killing process with pid 4133013 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 4133013 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 4133013 00:05:28.326 00:05:28.326 real 0m5.462s 00:05:28.326 user 0m5.158s 00:05:28.326 sys 0m0.313s 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.326 22:36:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.326 ************************************ 00:05:28.326 END TEST skip_rpc 00:05:28.326 ************************************ 00:05:28.326 22:36:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:28.326 22:36:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.326 22:36:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.326 22:36:35 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.326 ************************************ 00:05:28.326 START TEST skip_rpc_with_json 00:05:28.326 ************************************ 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4133704 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4133704 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 4133704 ']' 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.326 22:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.326 [2024-12-10 22:36:36.009148] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:05:28.326 [2024-12-10 22:36:36.009241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4133704 ] 00:05:28.584 [2024-12-10 22:36:36.092808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.585 [2024-12-10 22:36:36.168271] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.844 [2024-12-10 22:36:36.470327] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:28.844 request: 00:05:28.844 { 00:05:28.844 "trtype": "tcp", 00:05:28.844 "method": "nvmf_get_transports", 00:05:28.844 "req_id": 1 00:05:28.844 } 00:05:28.844 Got JSON-RPC error response 00:05:28.844 response: 00:05:28.844 { 00:05:28.844 "code": -19, 00:05:28.844 "message": "No such device" 00:05:28.844 } 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.844 [2024-12-10 22:36:36.478432] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.844 22:36:36 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.844 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.105 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.105 22:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:29.105 { 00:05:29.105 "subsystems": [ 00:05:29.105 { 00:05:29.105 "subsystem": "fsdev", 00:05:29.105 "config": [ 00:05:29.105 { 00:05:29.105 "method": "fsdev_set_opts", 00:05:29.105 "params": { 00:05:29.105 "fsdev_io_pool_size": 65535, 00:05:29.105 "fsdev_io_cache_size": 256 00:05:29.105 } 00:05:29.105 } 00:05:29.105 ] 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "subsystem": "vfio_user_target", 00:05:29.105 "config": null 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "subsystem": "keyring", 00:05:29.105 "config": [] 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "subsystem": "iobuf", 00:05:29.105 "config": [ 00:05:29.105 { 00:05:29.105 "method": "iobuf_set_options", 00:05:29.105 "params": { 00:05:29.105 "small_pool_count": 8192, 00:05:29.105 "large_pool_count": 1024, 00:05:29.105 "small_bufsize": 8192, 00:05:29.105 "large_bufsize": 135168, 00:05:29.105 "enable_numa": false 00:05:29.105 } 00:05:29.105 } 00:05:29.105 ] 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "subsystem": "sock", 00:05:29.105 "config": [ 00:05:29.105 { 00:05:29.105 "method": "sock_set_default_impl", 00:05:29.105 "params": { 00:05:29.105 "impl_name": "posix" 00:05:29.105 } 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "method": "sock_impl_set_options", 00:05:29.105 "params": { 00:05:29.105 "impl_name": "ssl", 00:05:29.105 "recv_buf_size": 4096, 00:05:29.105 "send_buf_size": 4096, 
00:05:29.105 "enable_recv_pipe": true, 00:05:29.105 "enable_quickack": false, 00:05:29.105 "enable_placement_id": 0, 00:05:29.105 "enable_zerocopy_send_server": true, 00:05:29.105 "enable_zerocopy_send_client": false, 00:05:29.105 "zerocopy_threshold": 0, 00:05:29.105 "tls_version": 0, 00:05:29.105 "enable_ktls": false 00:05:29.105 } 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "method": "sock_impl_set_options", 00:05:29.105 "params": { 00:05:29.105 "impl_name": "posix", 00:05:29.105 "recv_buf_size": 2097152, 00:05:29.105 "send_buf_size": 2097152, 00:05:29.105 "enable_recv_pipe": true, 00:05:29.105 "enable_quickack": false, 00:05:29.105 "enable_placement_id": 0, 00:05:29.105 "enable_zerocopy_send_server": true, 00:05:29.105 "enable_zerocopy_send_client": false, 00:05:29.105 "zerocopy_threshold": 0, 00:05:29.105 "tls_version": 0, 00:05:29.105 "enable_ktls": false 00:05:29.105 } 00:05:29.105 } 00:05:29.105 ] 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "subsystem": "vmd", 00:05:29.105 "config": [] 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "subsystem": "accel", 00:05:29.105 "config": [ 00:05:29.105 { 00:05:29.105 "method": "accel_set_options", 00:05:29.105 "params": { 00:05:29.105 "small_cache_size": 128, 00:05:29.105 "large_cache_size": 16, 00:05:29.105 "task_count": 2048, 00:05:29.105 "sequence_count": 2048, 00:05:29.105 "buf_count": 2048 00:05:29.105 } 00:05:29.105 } 00:05:29.105 ] 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "subsystem": "bdev", 00:05:29.105 "config": [ 00:05:29.105 { 00:05:29.105 "method": "bdev_set_options", 00:05:29.105 "params": { 00:05:29.105 "bdev_io_pool_size": 65535, 00:05:29.105 "bdev_io_cache_size": 256, 00:05:29.105 "bdev_auto_examine": true, 00:05:29.105 "iobuf_small_cache_size": 128, 00:05:29.105 "iobuf_large_cache_size": 16 00:05:29.105 } 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "method": "bdev_raid_set_options", 00:05:29.105 "params": { 00:05:29.105 "process_window_size_kb": 1024, 00:05:29.105 "process_max_bandwidth_mb_sec": 0 
00:05:29.105 } 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "method": "bdev_iscsi_set_options", 00:05:29.105 "params": { 00:05:29.105 "timeout_sec": 30 00:05:29.105 } 00:05:29.105 }, 00:05:29.105 { 00:05:29.105 "method": "bdev_nvme_set_options", 00:05:29.105 "params": { 00:05:29.105 "action_on_timeout": "none", 00:05:29.105 "timeout_us": 0, 00:05:29.105 "timeout_admin_us": 0, 00:05:29.105 "keep_alive_timeout_ms": 10000, 00:05:29.105 "arbitration_burst": 0, 00:05:29.105 "low_priority_weight": 0, 00:05:29.105 "medium_priority_weight": 0, 00:05:29.105 "high_priority_weight": 0, 00:05:29.105 "nvme_adminq_poll_period_us": 10000, 00:05:29.105 "nvme_ioq_poll_period_us": 0, 00:05:29.105 "io_queue_requests": 0, 00:05:29.105 "delay_cmd_submit": true, 00:05:29.105 "transport_retry_count": 4, 00:05:29.105 "bdev_retry_count": 3, 00:05:29.105 "transport_ack_timeout": 0, 00:05:29.105 "ctrlr_loss_timeout_sec": 0, 00:05:29.105 "reconnect_delay_sec": 0, 00:05:29.105 "fast_io_fail_timeout_sec": 0, 00:05:29.105 "disable_auto_failback": false, 00:05:29.105 "generate_uuids": false, 00:05:29.105 "transport_tos": 0, 00:05:29.105 "nvme_error_stat": false, 00:05:29.105 "rdma_srq_size": 0, 00:05:29.105 "io_path_stat": false, 00:05:29.105 "allow_accel_sequence": false, 00:05:29.105 "rdma_max_cq_size": 0, 00:05:29.105 "rdma_cm_event_timeout_ms": 0, 00:05:29.105 "dhchap_digests": [ 00:05:29.105 "sha256", 00:05:29.105 "sha384", 00:05:29.105 "sha512" 00:05:29.105 ], 00:05:29.105 "dhchap_dhgroups": [ 00:05:29.105 "null", 00:05:29.106 "ffdhe2048", 00:05:29.106 "ffdhe3072", 00:05:29.106 "ffdhe4096", 00:05:29.106 "ffdhe6144", 00:05:29.106 "ffdhe8192" 00:05:29.106 ], 00:05:29.106 "rdma_umr_per_io": false 00:05:29.106 } 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "method": "bdev_nvme_set_hotplug", 00:05:29.106 "params": { 00:05:29.106 "period_us": 100000, 00:05:29.106 "enable": false 00:05:29.106 } 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "method": "bdev_wait_for_examine" 00:05:29.106 } 00:05:29.106 
] 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "subsystem": "scsi", 00:05:29.106 "config": null 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "subsystem": "scheduler", 00:05:29.106 "config": [ 00:05:29.106 { 00:05:29.106 "method": "framework_set_scheduler", 00:05:29.106 "params": { 00:05:29.106 "name": "static" 00:05:29.106 } 00:05:29.106 } 00:05:29.106 ] 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "subsystem": "vhost_scsi", 00:05:29.106 "config": [] 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "subsystem": "vhost_blk", 00:05:29.106 "config": [] 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "subsystem": "ublk", 00:05:29.106 "config": [] 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "subsystem": "nbd", 00:05:29.106 "config": [] 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "subsystem": "nvmf", 00:05:29.106 "config": [ 00:05:29.106 { 00:05:29.106 "method": "nvmf_set_config", 00:05:29.106 "params": { 00:05:29.106 "discovery_filter": "match_any", 00:05:29.106 "admin_cmd_passthru": { 00:05:29.106 "identify_ctrlr": false 00:05:29.106 }, 00:05:29.106 "dhchap_digests": [ 00:05:29.106 "sha256", 00:05:29.106 "sha384", 00:05:29.106 "sha512" 00:05:29.106 ], 00:05:29.106 "dhchap_dhgroups": [ 00:05:29.106 "null", 00:05:29.106 "ffdhe2048", 00:05:29.106 "ffdhe3072", 00:05:29.106 "ffdhe4096", 00:05:29.106 "ffdhe6144", 00:05:29.106 "ffdhe8192" 00:05:29.106 ] 00:05:29.106 } 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "method": "nvmf_set_max_subsystems", 00:05:29.106 "params": { 00:05:29.106 "max_subsystems": 1024 00:05:29.106 } 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "method": "nvmf_set_crdt", 00:05:29.106 "params": { 00:05:29.106 "crdt1": 0, 00:05:29.106 "crdt2": 0, 00:05:29.106 "crdt3": 0 00:05:29.106 } 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "method": "nvmf_create_transport", 00:05:29.106 "params": { 00:05:29.106 "trtype": "TCP", 00:05:29.106 "max_queue_depth": 128, 00:05:29.106 "max_io_qpairs_per_ctrlr": 127, 00:05:29.106 "in_capsule_data_size": 4096, 00:05:29.106 "max_io_size": 
131072, 00:05:29.106 "io_unit_size": 131072, 00:05:29.106 "max_aq_depth": 128, 00:05:29.106 "num_shared_buffers": 511, 00:05:29.106 "buf_cache_size": 4294967295, 00:05:29.106 "dif_insert_or_strip": false, 00:05:29.106 "zcopy": false, 00:05:29.106 "c2h_success": true, 00:05:29.106 "sock_priority": 0, 00:05:29.106 "abort_timeout_sec": 1, 00:05:29.106 "ack_timeout": 0, 00:05:29.106 "data_wr_pool_size": 0 00:05:29.106 } 00:05:29.106 } 00:05:29.106 ] 00:05:29.106 }, 00:05:29.106 { 00:05:29.106 "subsystem": "iscsi", 00:05:29.106 "config": [ 00:05:29.106 { 00:05:29.106 "method": "iscsi_set_options", 00:05:29.106 "params": { 00:05:29.106 "node_base": "iqn.2016-06.io.spdk", 00:05:29.106 "max_sessions": 128, 00:05:29.106 "max_connections_per_session": 2, 00:05:29.106 "max_queue_depth": 64, 00:05:29.106 "default_time2wait": 2, 00:05:29.106 "default_time2retain": 20, 00:05:29.106 "first_burst_length": 8192, 00:05:29.106 "immediate_data": true, 00:05:29.106 "allow_duplicated_isid": false, 00:05:29.106 "error_recovery_level": 0, 00:05:29.106 "nop_timeout": 60, 00:05:29.106 "nop_in_interval": 30, 00:05:29.106 "disable_chap": false, 00:05:29.106 "require_chap": false, 00:05:29.106 "mutual_chap": false, 00:05:29.106 "chap_group": 0, 00:05:29.106 "max_large_datain_per_connection": 64, 00:05:29.106 "max_r2t_per_connection": 4, 00:05:29.106 "pdu_pool_size": 36864, 00:05:29.106 "immediate_data_pool_size": 16384, 00:05:29.106 "data_out_pool_size": 2048 00:05:29.106 } 00:05:29.106 } 00:05:29.106 ] 00:05:29.106 } 00:05:29.106 ] 00:05:29.106 } 00:05:29.106 22:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:29.106 22:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4133704 00:05:29.106 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4133704 ']' 00:05:29.106 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4133704 00:05:29.106 22:36:36 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:29.106 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.106 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4133704 00:05:29.106 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.106 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.106 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4133704' 00:05:29.106 killing process with pid 4133704 00:05:29.106 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4133704 00:05:29.106 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4133704 00:05:29.701 22:36:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4133844 00:05:29.701 22:36:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:29.701 22:36:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4133844 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4133844 ']' 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4133844 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4133844 00:05:35.188 22:36:42 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4133844' 00:05:35.188 killing process with pid 4133844 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4133844 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4133844 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:35.188 00:05:35.188 real 0m6.603s 00:05:35.188 user 0m6.355s 00:05:35.188 sys 0m0.687s 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.188 ************************************ 00:05:35.188 END TEST skip_rpc_with_json 00:05:35.188 ************************************ 00:05:35.188 22:36:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:35.188 22:36:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.188 22:36:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.188 22:36:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.188 ************************************ 00:05:35.188 START TEST skip_rpc_with_delay 00:05:35.188 ************************************ 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay 
-- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.188 [2024-12-10 22:36:42.668419] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:35.188 00:05:35.188 real 0m0.077s 00:05:35.188 user 0m0.050s 00:05:35.188 sys 0m0.027s 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.188 22:36:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:35.188 ************************************ 00:05:35.188 END TEST skip_rpc_with_delay 00:05:35.188 ************************************ 00:05:35.188 22:36:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:35.188 22:36:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:35.188 22:36:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:35.188 22:36:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.188 22:36:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.188 22:36:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.188 ************************************ 00:05:35.188 START TEST exit_on_failed_rpc_init 00:05:35.188 ************************************ 00:05:35.188 22:36:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:35.188 22:36:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4134565 00:05:35.188 22:36:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.188 22:36:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4134565 
00:05:35.188 22:36:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 4134565 ']' 00:05:35.188 22:36:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.188 22:36:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.189 22:36:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.189 22:36:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.189 22:36:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.189 [2024-12-10 22:36:42.797358] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:05:35.189 [2024-12-10 22:36:42.797454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4134565 ] 00:05:35.189 [2024-12-10 22:36:42.864804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.449 [2024-12-10 22:36:42.923875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.710 
22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:35.710 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.710 [2024-12-10 22:36:43.243742] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:05:35.710 [2024-12-10 22:36:43.243844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4134689 ] 00:05:35.710 [2024-12-10 22:36:43.311107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.710 [2024-12-10 22:36:43.369899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.710 [2024-12-10 22:36:43.370025] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:35.710 [2024-12-10 22:36:43.370045] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:35.710 [2024-12-10 22:36:43.370056] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4134565 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 4134565 ']' 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 4134565 00:05:35.971 22:36:43 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4134565 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4134565' 00:05:35.971 killing process with pid 4134565 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 4134565 00:05:35.971 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 4134565 00:05:36.229 00:05:36.229 real 0m1.148s 00:05:36.229 user 0m1.261s 00:05:36.229 sys 0m0.432s 00:05:36.229 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.229 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.229 ************************************ 00:05:36.229 END TEST exit_on_failed_rpc_init 00:05:36.229 ************************************ 00:05:36.229 22:36:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:36.229 00:05:36.229 real 0m13.649s 00:05:36.229 user 0m12.999s 00:05:36.229 sys 0m1.664s 00:05:36.229 22:36:43 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.229 22:36:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.230 ************************************ 00:05:36.230 END TEST skip_rpc 00:05:36.230 ************************************ 00:05:36.230 22:36:43 -- spdk/autotest.sh@158 -- # run_test rpc_client 
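The `killprocess 4134565` trace above shows the shape of the helper: probe the pid with `kill -0`, look up the process name with `ps -o comm=`, signal through `sudo` only when the wrapper process is `sudo` itself, then reap with `wait`. A minimal sketch of that pattern (the real `killprocess` in `autotest_common.sh` has more branches; names here mirror the trace but the body is illustrative):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the killprocess pattern seen in the trace above.
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only checks the pid is still alive.
    if ! kill -0 "$pid" 2>/dev/null; then
        return 0                        # already gone, nothing to do
    fi
    local name
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" = "sudo" ]; then
        sudo kill "$pid"                # signal through the sudo wrapper
    else
        kill "$pid"
    fi
    # Reap the child so the pid cannot be recycled under us; ignore the
    # nonzero status a TERM'd process reports.
    wait "$pid" 2>/dev/null || true
}
```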
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:36.230 22:36:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.230 22:36:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.230 22:36:43 -- common/autotest_common.sh@10 -- # set +x 00:05:36.488 ************************************ 00:05:36.488 START TEST rpc_client 00:05:36.488 ************************************ 00:05:36.488 22:36:43 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:36.488 * Looking for test storage... 00:05:36.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:36.488 22:36:44 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.488 22:36:44 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.488 22:36:44 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.488 22:36:44 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.488 22:36:44 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:36.488 22:36:44 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.488 22:36:44 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.488 --rc genhtml_branch_coverage=1 00:05:36.488 --rc genhtml_function_coverage=1 00:05:36.488 --rc genhtml_legend=1 00:05:36.488 --rc geninfo_all_blocks=1 00:05:36.488 --rc geninfo_unexecuted_blocks=1 00:05:36.488 00:05:36.488 ' 00:05:36.488 22:36:44 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.488 --rc genhtml_branch_coverage=1 
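The `cmp_versions 1.15 '<' 2` trace above splits both versions on `IFS=.-:` into arrays and compares them component by component, padding the shorter array with zeros. A minimal sketch of just the less-than path (the real `scripts/common.sh` helper also handles `>`, `=`, and mixed operators; this function name and body are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison traced above:
# split on '.', '-', or ':' and compare numerically left to right.
version_lt() {          # returns 0 if $1 < $2
    local IFS=.-:       # read -ra splits on these, as in the trace
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing components compare as 0 (so 2 == 2.0).
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
    done
    return 1            # equal is not less-than
}
```

This is why `lt 1.15 2` in the trace succeeds: the first components already decide the comparison (1 < 2), so 15 vs. nothing never matters.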
00:05:36.488 --rc genhtml_function_coverage=1 00:05:36.488 --rc genhtml_legend=1 00:05:36.488 --rc geninfo_all_blocks=1 00:05:36.488 --rc geninfo_unexecuted_blocks=1 00:05:36.488 00:05:36.488 ' 00:05:36.488 22:36:44 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.488 --rc genhtml_branch_coverage=1 00:05:36.488 --rc genhtml_function_coverage=1 00:05:36.488 --rc genhtml_legend=1 00:05:36.488 --rc geninfo_all_blocks=1 00:05:36.488 --rc geninfo_unexecuted_blocks=1 00:05:36.488 00:05:36.488 ' 00:05:36.488 22:36:44 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.488 --rc genhtml_branch_coverage=1 00:05:36.488 --rc genhtml_function_coverage=1 00:05:36.488 --rc genhtml_legend=1 00:05:36.488 --rc geninfo_all_blocks=1 00:05:36.488 --rc geninfo_unexecuted_blocks=1 00:05:36.488 00:05:36.488 ' 00:05:36.488 22:36:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:36.488 OK 00:05:36.488 22:36:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:36.488 00:05:36.488 real 0m0.164s 00:05:36.488 user 0m0.110s 00:05:36.488 sys 0m0.062s 00:05:36.488 22:36:44 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.488 22:36:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:36.488 ************************************ 00:05:36.488 END TEST rpc_client 00:05:36.488 ************************************ 00:05:36.488 22:36:44 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:36.488 22:36:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.488 22:36:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.488 22:36:44 -- common/autotest_common.sh@10 
-- # set +x 00:05:36.488 ************************************ 00:05:36.488 START TEST json_config 00:05:36.488 ************************************ 00:05:36.488 22:36:44 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:36.488 22:36:44 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.488 22:36:44 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.488 22:36:44 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.748 22:36:44 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.748 22:36:44 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.748 22:36:44 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.748 22:36:44 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.748 22:36:44 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.748 22:36:44 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.748 22:36:44 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.748 22:36:44 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.748 22:36:44 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.748 22:36:44 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.748 22:36:44 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.748 22:36:44 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.748 22:36:44 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:36.748 22:36:44 json_config -- scripts/common.sh@345 -- # : 1 00:05:36.748 22:36:44 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.748 22:36:44 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.748 22:36:44 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:36.748 22:36:44 json_config -- scripts/common.sh@353 -- # local d=1 00:05:36.748 22:36:44 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.748 22:36:44 json_config -- scripts/common.sh@355 -- # echo 1 00:05:36.748 22:36:44 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.748 22:36:44 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:36.748 22:36:44 json_config -- scripts/common.sh@353 -- # local d=2 00:05:36.748 22:36:44 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.748 22:36:44 json_config -- scripts/common.sh@355 -- # echo 2 00:05:36.748 22:36:44 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.748 22:36:44 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.748 22:36:44 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.748 22:36:44 json_config -- scripts/common.sh@368 -- # return 0 00:05:36.748 22:36:44 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.748 22:36:44 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.748 --rc genhtml_branch_coverage=1 00:05:36.748 --rc genhtml_function_coverage=1 00:05:36.748 --rc genhtml_legend=1 00:05:36.748 --rc geninfo_all_blocks=1 00:05:36.748 --rc geninfo_unexecuted_blocks=1 00:05:36.748 00:05:36.748 ' 00:05:36.748 22:36:44 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.748 --rc genhtml_branch_coverage=1 00:05:36.748 --rc genhtml_function_coverage=1 00:05:36.748 --rc genhtml_legend=1 00:05:36.748 --rc geninfo_all_blocks=1 00:05:36.748 --rc geninfo_unexecuted_blocks=1 00:05:36.748 00:05:36.748 ' 00:05:36.748 22:36:44 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.748 --rc genhtml_branch_coverage=1 00:05:36.748 --rc genhtml_function_coverage=1 00:05:36.748 --rc genhtml_legend=1 00:05:36.748 --rc geninfo_all_blocks=1 00:05:36.748 --rc geninfo_unexecuted_blocks=1 00:05:36.748 00:05:36.748 ' 00:05:36.748 22:36:44 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.748 --rc genhtml_branch_coverage=1 00:05:36.748 --rc genhtml_function_coverage=1 00:05:36.748 --rc genhtml_legend=1 00:05:36.748 --rc geninfo_all_blocks=1 00:05:36.748 --rc geninfo_unexecuted_blocks=1 00:05:36.748 00:05:36.748 ' 00:05:36.748 22:36:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.748 22:36:44 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:36.748 22:36:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.748 22:36:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.748 22:36:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.748 22:36:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.748 22:36:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.748 22:36:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.749 22:36:44 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.749 22:36:44 json_config -- paths/export.sh@5 -- # export PATH 00:05:36.749 22:36:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.749 22:36:44 json_config -- nvmf/common.sh@51 -- # : 0 00:05:36.749 22:36:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.749 22:36:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.749 22:36:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.749 22:36:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.749 22:36:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.749 22:36:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.749 22:36:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.749 22:36:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.749 22:36:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:36.749 INFO: JSON configuration test init 00:05:36.749 22:36:44 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:36.749 22:36:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.749 22:36:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:36.749 22:36:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.749 22:36:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.749 22:36:44 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:36.749 22:36:44 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.749 22:36:44 json_config -- json_config/common.sh@10 -- # shift 00:05:36.749 22:36:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.749 22:36:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.749 22:36:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.749 22:36:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.749 22:36:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.749 22:36:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4134953 00:05:36.749 22:36:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:36.749 22:36:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.749 Waiting for target to run... 
00:05:36.749 22:36:44 json_config -- json_config/common.sh@25 -- # waitforlisten 4134953 /var/tmp/spdk_tgt.sock 00:05:36.749 22:36:44 json_config -- common/autotest_common.sh@835 -- # '[' -z 4134953 ']' 00:05:36.749 22:36:44 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.749 22:36:44 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.749 22:36:44 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.749 22:36:44 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.749 22:36:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.749 [2024-12-10 22:36:44.382851] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:05:36.749 [2024-12-10 22:36:44.382954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4134953 ] 00:05:37.317 [2024-12-10 22:36:44.917622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.317 [2024-12-10 22:36:44.969091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.883 22:36:45 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.883 22:36:45 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:37.883 22:36:45 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.883 00:05:37.883 22:36:45 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:37.883 22:36:45 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:37.884 22:36:45 json_config -- 
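The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock..." message comes from `waitforlisten`, which polls with a retry budget (`max_retries=100` in the trace) until the target either creates its RPC socket or dies. An illustrative sketch of that idea (the real helper also exercises the RPC endpoint; this body only checks the socket file):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the waitforlisten pattern from the trace:
# poll until the pid has created its UNIX-domain RPC socket.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        # If the target died, give up immediately instead of burning retries.
        if ! kill -0 "$pid" 2>/dev/null; then
            return 1
        fi
        if [ -S "$sock" ]; then         # -S: path exists and is a socket
            return 0
        fi
        sleep 0.1
    done
    return 1                            # retry budget exhausted
}
```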
common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.884 22:36:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.884 22:36:45 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:37.884 22:36:45 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:37.884 22:36:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.884 22:36:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.884 22:36:45 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:37.884 22:36:45 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:37.884 22:36:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:41.177 22:36:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.177 22:36:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:41.177 
22:36:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:41.177 22:36:48 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@54 -- # sort 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:41.178 22:36:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.178 22:36:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@237 -- # timing_enter 
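The `type_diff` check above uses a classic shell set-difference trick: concatenate the expected and reported notification-type lists, one item per line, then `sort | uniq -u` keeps only lines that occur exactly once. Items present in both lists occur twice and vanish, so an empty result means the two sets match. A self-contained sketch (list contents taken from the trace; variable names are illustrative):

```shell
#!/usr/bin/env bash
# Set symmetric difference via sort | uniq -u, as in the type_diff check.
enabled=(bdev_register bdev_unregister fsdev_register fsdev_unregister)
reported=(fsdev_register fsdev_unregister bdev_register bdev_unregister)

# Every type in both arrays appears twice after concatenation, so
# uniq -u (print only non-repeated lines) drops it.
type_diff=$(printf '%s\n' "${enabled[@]}" "${reported[@]}" | sort | uniq -u)

if [ -z "$type_diff" ]; then
    echo "notification types match"
else
    echo "mismatched types: $type_diff"
fi
```

Note that `uniq` only collapses adjacent duplicates, which is why the `sort` in the middle of the pipeline is essential.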
create_nvmf_subsystem_config 00:05:41.178 22:36:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.178 22:36:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:41.178 22:36:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:41.178 22:36:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:41.437 MallocForNvmf0 00:05:41.437 22:36:49 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.437 22:36:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.697 MallocForNvmf1 00:05:41.955 22:36:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:41.955 22:36:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:42.213 [2024-12-10 22:36:49.688409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.213 22:36:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.213 22:36:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.471 22:36:49 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.471 22:36:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.729 22:36:50 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:42.729 22:36:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:42.986 22:36:50 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:42.987 22:36:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:43.244 [2024-12-10 22:36:50.808018] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:43.244 22:36:50 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:43.244 22:36:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.244 22:36:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.244 22:36:50 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:43.244 22:36:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.244 22:36:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.244 22:36:50 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:43.244 22:36:50 
json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.244 22:36:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.502 MallocBdevForConfigChangeCheck 00:05:43.502 22:36:51 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:43.502 22:36:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.502 22:36:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.502 22:36:51 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:43.502 22:36:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.070 22:36:51 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:44.070 INFO: shutting down applications... 
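The tgt_check_notification_types step near the top of this run (json_config.sh@54) compares the expected and reported notification types by piping both lists through `tr ' ' '\n' | sort | uniq -u`, so `type_diff` is empty exactly when the lists match. A minimal Python sketch of that check (the helper name is hypothetical, not SPDK code):

```python
# Mirror of the `tr | sort | uniq -u` pipeline from json_config.sh@54:
# concatenate both lists and keep only tokens that occur exactly once
# (uniq -u semantics) -- an empty result means the lists agree.
from collections import Counter

def type_diff(expected, reported):
    # Hypothetical helper, not part of SPDK.
    counts = Counter(expected + reported)
    return sorted(t for t, n in counts.items() if n == 1)

expected = ["bdev_register", "bdev_unregister", "fsdev_register", "fsdev_unregister"]
reported = ["fsdev_register", "fsdev_unregister", "bdev_register", "bdev_unregister"]
diff = type_diff(expected, reported)   # empty: the target reports all expected types
```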
00:05:44.070 22:36:51 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:44.070 22:36:51 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:44.070 22:36:51 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:44.070 22:36:51 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:45.980 Calling clear_iscsi_subsystem 00:05:45.980 Calling clear_nvmf_subsystem 00:05:45.980 Calling clear_nbd_subsystem 00:05:45.980 Calling clear_ublk_subsystem 00:05:45.980 Calling clear_vhost_blk_subsystem 00:05:45.980 Calling clear_vhost_scsi_subsystem 00:05:45.980 Calling clear_bdev_subsystem 00:05:45.980 22:36:53 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:45.980 22:36:53 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:45.980 22:36:53 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:45.980 22:36:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:45.980 22:36:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.980 22:36:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:45.980 22:36:53 json_config -- json_config/json_config.sh@352 -- # break 00:05:45.980 22:36:53 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:45.980 22:36:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:45.980 22:36:53 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:45.980 22:36:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:45.980 22:36:53 json_config -- json_config/common.sh@35 -- # [[ -n 4134953 ]] 00:05:45.980 22:36:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4134953 00:05:45.980 22:36:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:45.980 22:36:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.980 22:36:53 json_config -- json_config/common.sh@41 -- # kill -0 4134953 00:05:45.980 22:36:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.550 22:36:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.551 22:36:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.551 22:36:54 json_config -- json_config/common.sh@41 -- # kill -0 4134953 00:05:46.551 22:36:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:46.551 22:36:54 json_config -- json_config/common.sh@43 -- # break 00:05:46.551 22:36:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:46.551 22:36:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:46.551 SPDK target shutdown done 00:05:46.551 22:36:54 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:46.551 INFO: relaunching applications... 
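The shutdown sequence above (json_config/common.sh@38-45) sends SIGINT to the target pid, then polls with `kill -0` up to 30 times with a 0.5 s sleep before declaring "SPDK target shutdown done". A rough Python sketch of that loop, using a `subprocess.Popen` handle in place of a raw pid (hypothetical helper, assumptions noted in comments):

```python
# Sketch of json_config_test_shutdown_app: SIGINT, then poll for exit.
# The shell probes a pid with `kill -0`; here we use Popen.poll() instead.
import signal
import subprocess
import sys
import time

def shutdown_app(proc, retries=30, interval=0.5):
    # Hypothetical stand-in for the shell helper, not SPDK code.
    proc.send_signal(signal.SIGINT)
    for _ in range(retries):
        if proc.poll() is not None:
            return True            # "SPDK target shutdown done"
        time.sleep(interval)
    return False                   # still alive after retries * interval

# Demo against a throwaway child process standing in for spdk_tgt.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
stopped = shutdown_app(proc)
```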
00:05:46.551 22:36:54 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.551 22:36:54 json_config -- json_config/common.sh@9 -- # local app=target 00:05:46.551 22:36:54 json_config -- json_config/common.sh@10 -- # shift 00:05:46.551 22:36:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.551 22:36:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.551 22:36:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.551 22:36:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.551 22:36:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.551 22:36:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4136257 00:05:46.551 22:36:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.551 22:36:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.551 Waiting for target to run... 00:05:46.551 22:36:54 json_config -- json_config/common.sh@25 -- # waitforlisten 4136257 /var/tmp/spdk_tgt.sock 00:05:46.551 22:36:54 json_config -- common/autotest_common.sh@835 -- # '[' -z 4136257 ']' 00:05:46.551 22:36:54 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.551 22:36:54 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.551 22:36:54 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:46.551 22:36:54 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.551 22:36:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.551 [2024-12-10 22:36:54.205472] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:05:46.551 [2024-12-10 22:36:54.205586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136257 ] 00:05:47.119 [2024-12-10 22:36:54.569684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.119 [2024-12-10 22:36:54.612699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.458 [2024-12-10 22:36:57.661523] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.458 [2024-12-10 22:36:57.694041] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:50.458 22:36:57 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.458 22:36:57 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:50.458 22:36:57 json_config -- json_config/common.sh@26 -- # echo '' 00:05:50.458 00:05:50.458 22:36:57 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:50.458 22:36:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:50.458 INFO: Checking if target configuration is the same... 
00:05:50.458 22:36:57 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.458 22:36:57 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:50.458 22:36:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.458 + '[' 2 -ne 2 ']' 00:05:50.458 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:50.458 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:50.458 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:50.458 +++ basename /dev/fd/62 00:05:50.458 ++ mktemp /tmp/62.XXX 00:05:50.458 + tmp_file_1=/tmp/62.gRW 00:05:50.458 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.458 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.458 + tmp_file_2=/tmp/spdk_tgt_config.json.Bi8 00:05:50.458 + ret=0 00:05:50.458 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:50.458 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:50.743 + diff -u /tmp/62.gRW /tmp/spdk_tgt_config.json.Bi8 00:05:50.743 + echo 'INFO: JSON config files are the same' 00:05:50.743 INFO: JSON config files are the same 00:05:50.743 + rm /tmp/62.gRW /tmp/spdk_tgt_config.json.Bi8 00:05:50.743 + exit 0 00:05:50.743 22:36:58 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:50.743 22:36:58 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:50.743 INFO: changing configuration and checking if this can be detected... 
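The json_diff.sh run above works by passing both the saved config and spdk_tgt_config.json through `config_filter.py -method sort` into temp files, then running a plain `diff -u`; sorting first makes the text diff insensitive to key order. A minimal Python sketch of the same idea (helper names are hypothetical):

```python
# Sketch of the json_diff.sh approach: normalize key order, then a
# plain text comparison is meaningful.
import json

def normalize(config):
    # Hypothetical stand-in for `config_filter.py -method sort`: sort
    # keys recursively so semantically equal configs serialize
    # identically (list order is left significant in this sketch).
    return json.dumps(config, sort_keys=True, indent=2)

def configs_match(a, b):
    return normalize(a) == normalize(b)

# Same RPC entry with keys in a different order still matches.
cfg_a = {"params": {"name": "MallocBdevForConfigChangeCheck", "block_size": 512},
         "method": "bdev_malloc_create"}
cfg_b = {"method": "bdev_malloc_create",
         "params": {"block_size": 512, "name": "MallocBdevForConfigChangeCheck"}}
```

Deleting MallocBdevForConfigChangeCheck, as the next step does, makes the normalized texts diverge, which is exactly the `ret=1` path the test expects.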
00:05:50.743 22:36:58 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.743 22:36:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.743 22:36:58 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.743 22:36:58 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:50.743 22:36:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.743 + '[' 2 -ne 2 ']' 00:05:50.743 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:50.743 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:50.743 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:50.743 +++ basename /dev/fd/62 00:05:50.743 ++ mktemp /tmp/62.XXX 00:05:50.743 + tmp_file_1=/tmp/62.R3Q 00:05:50.743 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.743 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.743 + tmp_file_2=/tmp/spdk_tgt_config.json.rYS 00:05:50.743 + ret=0 00:05:50.743 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.315 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.315 + diff -u /tmp/62.R3Q /tmp/spdk_tgt_config.json.rYS 00:05:51.315 + ret=1 00:05:51.315 + echo '=== Start of file: /tmp/62.R3Q ===' 00:05:51.315 + cat /tmp/62.R3Q 00:05:51.315 + echo '=== End of file: /tmp/62.R3Q ===' 00:05:51.315 + echo '' 00:05:51.315 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rYS ===' 00:05:51.315 + cat /tmp/spdk_tgt_config.json.rYS 00:05:51.315 + echo '=== End of file: /tmp/spdk_tgt_config.json.rYS ===' 00:05:51.315 + echo '' 00:05:51.315 + rm /tmp/62.R3Q /tmp/spdk_tgt_config.json.rYS 00:05:51.315 + exit 1 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:51.315 INFO: configuration change detected. 
00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@324 -- # [[ -n 4136257 ]] 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.315 22:36:58 json_config -- json_config/json_config.sh@330 -- # killprocess 4136257 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@954 -- # '[' -z 4136257 ']' 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@958 -- # kill -0 
4136257 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@959 -- # uname 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4136257 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4136257' 00:05:51.315 killing process with pid 4136257 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@973 -- # kill 4136257 00:05:51.315 22:36:58 json_config -- common/autotest_common.sh@978 -- # wait 4136257 00:05:53.226 22:37:00 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.226 22:37:00 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:53.226 22:37:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.226 22:37:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.226 22:37:00 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:53.226 22:37:00 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:53.226 INFO: Success 00:05:53.226 00:05:53.226 real 0m16.476s 00:05:53.226 user 0m18.145s 00:05:53.226 sys 0m2.617s 00:05:53.226 22:37:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.226 22:37:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.226 ************************************ 00:05:53.226 END TEST json_config 00:05:53.226 ************************************ 00:05:53.226 22:37:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:53.226 22:37:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.226 22:37:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.226 22:37:00 -- common/autotest_common.sh@10 -- # set +x 00:05:53.226 ************************************ 00:05:53.226 START TEST json_config_extra_key 00:05:53.226 ************************************ 00:05:53.226 22:37:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:53.226 22:37:00 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:53.226 22:37:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:53.226 22:37:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:53.226 22:37:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.226 22:37:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:53.226 22:37:00 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.226 22:37:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:53.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.226 --rc genhtml_branch_coverage=1 00:05:53.226 --rc genhtml_function_coverage=1 00:05:53.226 --rc genhtml_legend=1 00:05:53.226 --rc geninfo_all_blocks=1 
00:05:53.226 --rc geninfo_unexecuted_blocks=1 00:05:53.226 00:05:53.226 ' 00:05:53.226 22:37:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:53.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.226 --rc genhtml_branch_coverage=1 00:05:53.226 --rc genhtml_function_coverage=1 00:05:53.226 --rc genhtml_legend=1 00:05:53.227 --rc geninfo_all_blocks=1 00:05:53.227 --rc geninfo_unexecuted_blocks=1 00:05:53.227 00:05:53.227 ' 00:05:53.227 22:37:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:53.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.227 --rc genhtml_branch_coverage=1 00:05:53.227 --rc genhtml_function_coverage=1 00:05:53.227 --rc genhtml_legend=1 00:05:53.227 --rc geninfo_all_blocks=1 00:05:53.227 --rc geninfo_unexecuted_blocks=1 00:05:53.227 00:05:53.227 ' 00:05:53.227 22:37:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:53.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.227 --rc genhtml_branch_coverage=1 00:05:53.227 --rc genhtml_function_coverage=1 00:05:53.227 --rc genhtml_legend=1 00:05:53.227 --rc geninfo_all_blocks=1 00:05:53.227 --rc geninfo_unexecuted_blocks=1 00:05:53.227 00:05:53.227 ' 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:53.227 22:37:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:53.227 22:37:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.227 22:37:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.227 22:37:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.227 22:37:00 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.227 22:37:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.227 22:37:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.227 22:37:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:53.227 22:37:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:53.227 22:37:00 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:53.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:53.227 22:37:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:53.227 INFO: launching applications... 00:05:53.227 22:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:53.227 22:37:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:53.227 22:37:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:53.227 22:37:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:53.227 22:37:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:53.227 22:37:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:53.227 22:37:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.227 22:37:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.227 22:37:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4137077 00:05:53.227 22:37:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:53.227 22:37:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:53.227 Waiting for target to run... 
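Both test apps block on `waitforlisten` against /var/tmp/spdk_tgt.sock ("Waiting for process to start up and listen on UNIX domain socket..."). A rough Python sketch of that pattern, polling a UNIX domain socket until something accepts (hypothetical helper; the real waitforlisten may perform additional checks such as verifying the pid):

```python
# Sketch of waiting for an app to listen on a UNIX domain socket:
# retry connect() until it succeeds or the retry budget runs out.
import os
import socket
import tempfile
import time

def wait_for_listen(sock_path, max_retries=100, interval=0.1):
    # Hypothetical helper, not the real waitforlisten.
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True            # target is up and accepting
        except OSError:
            time.sleep(interval)   # not listening yet; retry
        finally:
            s.close()
    return False

# Demo: a throwaway listener standing in for spdk_tgt's RPC socket.
sock_path = os.path.join(tempfile.mkdtemp(), "spdk_tgt.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)
ready = wait_for_listen(sock_path, max_retries=5)
server.close()
```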
00:05:53.227 22:37:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4137077 /var/tmp/spdk_tgt.sock 00:05:53.227 22:37:00 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 4137077 ']' 00:05:53.227 22:37:00 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.227 22:37:00 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.227 22:37:00 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.227 22:37:00 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.227 22:37:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:53.227 [2024-12-10 22:37:00.891504] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:05:53.227 [2024-12-10 22:37:00.891605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137077 ] 00:05:53.794 [2024-12-10 22:37:01.402977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.794 [2024-12-10 22:37:01.457047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.360 22:37:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.361 22:37:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:54.361 22:37:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:54.361 00:05:54.361 22:37:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:54.361 INFO: shutting down applications... 00:05:54.361 22:37:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:54.361 22:37:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:54.361 22:37:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:54.361 22:37:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4137077 ]] 00:05:54.361 22:37:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4137077 00:05:54.361 22:37:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:54.361 22:37:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.361 22:37:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4137077 00:05:54.361 22:37:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.930 22:37:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.930 22:37:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.930 22:37:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4137077 00:05:54.930 22:37:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:54.930 22:37:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:54.930 22:37:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:54.930 22:37:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:54.930 SPDK target shutdown done 00:05:54.930 22:37:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:54.930 Success 00:05:54.930 00:05:54.930 real 0m1.685s 00:05:54.930 user 0m1.518s 00:05:54.930 sys 0m0.631s 00:05:54.930 22:37:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.930 22:37:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set 
+x 00:05:54.930 ************************************ 00:05:54.930 END TEST json_config_extra_key 00:05:54.930 ************************************ 00:05:54.930 22:37:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.930 22:37:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.930 22:37:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.930 22:37:02 -- common/autotest_common.sh@10 -- # set +x 00:05:54.930 ************************************ 00:05:54.930 START TEST alias_rpc 00:05:54.930 ************************************ 00:05:54.930 22:37:02 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.930 * Looking for test storage... 00:05:54.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:54.930 22:37:02 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.930 22:37:02 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.930 22:37:02 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.930 22:37:02 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@340 -- # 
ver1_l=2 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:54.930 22:37:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.931 22:37:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:54.931 22:37:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.931 22:37:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.931 22:37:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.931 22:37:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:54.931 22:37:02 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.931 22:37:02 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.931 --rc genhtml_branch_coverage=1 00:05:54.931 --rc genhtml_function_coverage=1 00:05:54.931 --rc genhtml_legend=1 00:05:54.931 --rc geninfo_all_blocks=1 00:05:54.931 --rc geninfo_unexecuted_blocks=1 00:05:54.931 
00:05:54.931 ' 00:05:54.931 22:37:02 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.931 --rc genhtml_branch_coverage=1 00:05:54.931 --rc genhtml_function_coverage=1 00:05:54.931 --rc genhtml_legend=1 00:05:54.931 --rc geninfo_all_blocks=1 00:05:54.931 --rc geninfo_unexecuted_blocks=1 00:05:54.931 00:05:54.931 ' 00:05:54.931 22:37:02 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.931 --rc genhtml_branch_coverage=1 00:05:54.931 --rc genhtml_function_coverage=1 00:05:54.931 --rc genhtml_legend=1 00:05:54.931 --rc geninfo_all_blocks=1 00:05:54.931 --rc geninfo_unexecuted_blocks=1 00:05:54.931 00:05:54.931 ' 00:05:54.931 22:37:02 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.931 --rc genhtml_branch_coverage=1 00:05:54.931 --rc genhtml_function_coverage=1 00:05:54.931 --rc genhtml_legend=1 00:05:54.931 --rc geninfo_all_blocks=1 00:05:54.931 --rc geninfo_unexecuted_blocks=1 00:05:54.931 00:05:54.931 ' 00:05:54.931 22:37:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:54.931 22:37:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4137395 00:05:54.931 22:37:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.931 22:37:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4137395 00:05:54.931 22:37:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 4137395 ']' 00:05:54.931 22:37:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.931 22:37:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.931 22:37:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.931 22:37:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.931 22:37:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.931 [2024-12-10 22:37:02.624333] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:05:54.931 [2024-12-10 22:37:02.624427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137395 ] 00:05:55.191 [2024-12-10 22:37:02.691365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.191 [2024-12-10 22:37:02.749034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.449 22:37:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.449 22:37:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.449 22:37:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:55.709 22:37:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4137395 00:05:55.709 22:37:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 4137395 ']' 00:05:55.709 22:37:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 4137395 00:05:55.709 22:37:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:55.709 22:37:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.709 22:37:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4137395 00:05:55.709 22:37:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.709 22:37:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.709 
22:37:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4137395' 00:05:55.709 killing process with pid 4137395 00:05:55.709 22:37:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 4137395 00:05:55.709 22:37:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 4137395 00:05:56.277 00:05:56.277 real 0m1.348s 00:05:56.277 user 0m1.476s 00:05:56.277 sys 0m0.443s 00:05:56.277 22:37:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.277 22:37:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.277 ************************************ 00:05:56.277 END TEST alias_rpc 00:05:56.277 ************************************ 00:05:56.277 22:37:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:56.277 22:37:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:56.277 22:37:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.277 22:37:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.277 22:37:03 -- common/autotest_common.sh@10 -- # set +x 00:05:56.277 ************************************ 00:05:56.277 START TEST spdkcli_tcp 00:05:56.277 ************************************ 00:05:56.277 22:37:03 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:56.277 * Looking for test storage... 
00:05:56.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:56.277 22:37:03 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.277 22:37:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.277 22:37:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.277 22:37:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.277 22:37:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:56.277 22:37:03 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.277 22:37:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.277 --rc genhtml_branch_coverage=1 00:05:56.277 --rc genhtml_function_coverage=1 00:05:56.277 --rc genhtml_legend=1 00:05:56.277 --rc geninfo_all_blocks=1 00:05:56.277 --rc geninfo_unexecuted_blocks=1 00:05:56.277 00:05:56.277 ' 00:05:56.277 22:37:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.277 --rc genhtml_branch_coverage=1 00:05:56.277 --rc genhtml_function_coverage=1 00:05:56.277 --rc genhtml_legend=1 00:05:56.277 --rc geninfo_all_blocks=1 00:05:56.277 --rc geninfo_unexecuted_blocks=1 00:05:56.277 00:05:56.277 ' 00:05:56.277 22:37:03 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.277 --rc genhtml_branch_coverage=1 00:05:56.277 --rc genhtml_function_coverage=1 00:05:56.277 --rc genhtml_legend=1 00:05:56.277 --rc geninfo_all_blocks=1 00:05:56.277 --rc geninfo_unexecuted_blocks=1 00:05:56.277 00:05:56.277 ' 00:05:56.277 22:37:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.277 --rc genhtml_branch_coverage=1 00:05:56.277 --rc genhtml_function_coverage=1 00:05:56.277 --rc genhtml_legend=1 00:05:56.277 --rc geninfo_all_blocks=1 00:05:56.277 --rc geninfo_unexecuted_blocks=1 00:05:56.277 00:05:56.277 ' 00:05:56.277 22:37:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:56.278 22:37:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:56.278 22:37:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:56.278 22:37:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:56.278 22:37:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:56.278 22:37:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:56.278 22:37:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:56.278 22:37:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.278 22:37:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.278 22:37:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4137592 00:05:56.278 22:37:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:56.278 22:37:03 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 4137592 00:05:56.278 22:37:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 4137592 ']' 00:05:56.278 22:37:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.278 22:37:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.278 22:37:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.278 22:37:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.278 22:37:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.536 [2024-12-10 22:37:04.025923] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:05:56.536 [2024-12-10 22:37:04.026016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137592 ] 00:05:56.536 [2024-12-10 22:37:04.090826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.536 [2024-12-10 22:37:04.148752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.536 [2024-12-10 22:37:04.148756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.794 22:37:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.794 22:37:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:56.794 22:37:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4137718 00:05:56.794 22:37:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:56.794 22:37:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:57.054 [ 00:05:57.054 "bdev_malloc_delete", 00:05:57.054 "bdev_malloc_create", 00:05:57.054 "bdev_null_resize", 00:05:57.054 "bdev_null_delete", 00:05:57.054 "bdev_null_create", 00:05:57.054 "bdev_nvme_cuse_unregister", 00:05:57.054 "bdev_nvme_cuse_register", 00:05:57.054 "bdev_opal_new_user", 00:05:57.054 "bdev_opal_set_lock_state", 00:05:57.054 "bdev_opal_delete", 00:05:57.054 "bdev_opal_get_info", 00:05:57.054 "bdev_opal_create", 00:05:57.055 "bdev_nvme_opal_revert", 00:05:57.055 "bdev_nvme_opal_init", 00:05:57.055 "bdev_nvme_send_cmd", 00:05:57.055 "bdev_nvme_set_keys", 00:05:57.055 "bdev_nvme_get_path_iostat", 00:05:57.055 "bdev_nvme_get_mdns_discovery_info", 00:05:57.055 "bdev_nvme_stop_mdns_discovery", 00:05:57.055 "bdev_nvme_start_mdns_discovery", 00:05:57.055 "bdev_nvme_set_multipath_policy", 00:05:57.055 "bdev_nvme_set_preferred_path", 00:05:57.055 "bdev_nvme_get_io_paths", 00:05:57.055 "bdev_nvme_remove_error_injection", 00:05:57.055 "bdev_nvme_add_error_injection", 00:05:57.055 "bdev_nvme_get_discovery_info", 00:05:57.055 "bdev_nvme_stop_discovery", 00:05:57.055 "bdev_nvme_start_discovery", 00:05:57.055 "bdev_nvme_get_controller_health_info", 00:05:57.055 "bdev_nvme_disable_controller", 00:05:57.055 "bdev_nvme_enable_controller", 00:05:57.055 "bdev_nvme_reset_controller", 00:05:57.055 "bdev_nvme_get_transport_statistics", 00:05:57.055 "bdev_nvme_apply_firmware", 00:05:57.055 "bdev_nvme_detach_controller", 00:05:57.055 "bdev_nvme_get_controllers", 00:05:57.055 "bdev_nvme_attach_controller", 00:05:57.055 "bdev_nvme_set_hotplug", 00:05:57.055 "bdev_nvme_set_options", 00:05:57.055 "bdev_passthru_delete", 00:05:57.055 "bdev_passthru_create", 00:05:57.055 "bdev_lvol_set_parent_bdev", 00:05:57.055 "bdev_lvol_set_parent", 00:05:57.055 "bdev_lvol_check_shallow_copy", 00:05:57.055 "bdev_lvol_start_shallow_copy", 00:05:57.055 "bdev_lvol_grow_lvstore", 00:05:57.055 "bdev_lvol_get_lvols", 00:05:57.055 
"bdev_lvol_get_lvstores", 00:05:57.055 "bdev_lvol_delete", 00:05:57.055 "bdev_lvol_set_read_only", 00:05:57.055 "bdev_lvol_resize", 00:05:57.055 "bdev_lvol_decouple_parent", 00:05:57.055 "bdev_lvol_inflate", 00:05:57.055 "bdev_lvol_rename", 00:05:57.055 "bdev_lvol_clone_bdev", 00:05:57.055 "bdev_lvol_clone", 00:05:57.055 "bdev_lvol_snapshot", 00:05:57.055 "bdev_lvol_create", 00:05:57.055 "bdev_lvol_delete_lvstore", 00:05:57.055 "bdev_lvol_rename_lvstore", 00:05:57.055 "bdev_lvol_create_lvstore", 00:05:57.055 "bdev_raid_set_options", 00:05:57.055 "bdev_raid_remove_base_bdev", 00:05:57.055 "bdev_raid_add_base_bdev", 00:05:57.055 "bdev_raid_delete", 00:05:57.055 "bdev_raid_create", 00:05:57.055 "bdev_raid_get_bdevs", 00:05:57.055 "bdev_error_inject_error", 00:05:57.055 "bdev_error_delete", 00:05:57.055 "bdev_error_create", 00:05:57.055 "bdev_split_delete", 00:05:57.055 "bdev_split_create", 00:05:57.055 "bdev_delay_delete", 00:05:57.055 "bdev_delay_create", 00:05:57.055 "bdev_delay_update_latency", 00:05:57.055 "bdev_zone_block_delete", 00:05:57.055 "bdev_zone_block_create", 00:05:57.055 "blobfs_create", 00:05:57.055 "blobfs_detect", 00:05:57.055 "blobfs_set_cache_size", 00:05:57.055 "bdev_aio_delete", 00:05:57.055 "bdev_aio_rescan", 00:05:57.055 "bdev_aio_create", 00:05:57.055 "bdev_ftl_set_property", 00:05:57.055 "bdev_ftl_get_properties", 00:05:57.055 "bdev_ftl_get_stats", 00:05:57.055 "bdev_ftl_unmap", 00:05:57.055 "bdev_ftl_unload", 00:05:57.055 "bdev_ftl_delete", 00:05:57.055 "bdev_ftl_load", 00:05:57.055 "bdev_ftl_create", 00:05:57.055 "bdev_virtio_attach_controller", 00:05:57.055 "bdev_virtio_scsi_get_devices", 00:05:57.055 "bdev_virtio_detach_controller", 00:05:57.055 "bdev_virtio_blk_set_hotplug", 00:05:57.055 "bdev_iscsi_delete", 00:05:57.055 "bdev_iscsi_create", 00:05:57.055 "bdev_iscsi_set_options", 00:05:57.055 "accel_error_inject_error", 00:05:57.055 "ioat_scan_accel_module", 00:05:57.055 "dsa_scan_accel_module", 00:05:57.055 "iaa_scan_accel_module", 
00:05:57.055 "vfu_virtio_create_fs_endpoint", 00:05:57.055 "vfu_virtio_create_scsi_endpoint", 00:05:57.055 "vfu_virtio_scsi_remove_target", 00:05:57.055 "vfu_virtio_scsi_add_target", 00:05:57.055 "vfu_virtio_create_blk_endpoint", 00:05:57.055 "vfu_virtio_delete_endpoint", 00:05:57.055 "keyring_file_remove_key", 00:05:57.055 "keyring_file_add_key", 00:05:57.055 "keyring_linux_set_options", 00:05:57.055 "fsdev_aio_delete", 00:05:57.055 "fsdev_aio_create", 00:05:57.055 "iscsi_get_histogram", 00:05:57.055 "iscsi_enable_histogram", 00:05:57.055 "iscsi_set_options", 00:05:57.055 "iscsi_get_auth_groups", 00:05:57.055 "iscsi_auth_group_remove_secret", 00:05:57.055 "iscsi_auth_group_add_secret", 00:05:57.055 "iscsi_delete_auth_group", 00:05:57.055 "iscsi_create_auth_group", 00:05:57.055 "iscsi_set_discovery_auth", 00:05:57.055 "iscsi_get_options", 00:05:57.055 "iscsi_target_node_request_logout", 00:05:57.055 "iscsi_target_node_set_redirect", 00:05:57.055 "iscsi_target_node_set_auth", 00:05:57.055 "iscsi_target_node_add_lun", 00:05:57.055 "iscsi_get_stats", 00:05:57.055 "iscsi_get_connections", 00:05:57.055 "iscsi_portal_group_set_auth", 00:05:57.055 "iscsi_start_portal_group", 00:05:57.055 "iscsi_delete_portal_group", 00:05:57.055 "iscsi_create_portal_group", 00:05:57.055 "iscsi_get_portal_groups", 00:05:57.055 "iscsi_delete_target_node", 00:05:57.055 "iscsi_target_node_remove_pg_ig_maps", 00:05:57.055 "iscsi_target_node_add_pg_ig_maps", 00:05:57.055 "iscsi_create_target_node", 00:05:57.055 "iscsi_get_target_nodes", 00:05:57.055 "iscsi_delete_initiator_group", 00:05:57.055 "iscsi_initiator_group_remove_initiators", 00:05:57.055 "iscsi_initiator_group_add_initiators", 00:05:57.056 "iscsi_create_initiator_group", 00:05:57.056 "iscsi_get_initiator_groups", 00:05:57.056 "nvmf_set_crdt", 00:05:57.056 "nvmf_set_config", 00:05:57.056 "nvmf_set_max_subsystems", 00:05:57.056 "nvmf_stop_mdns_prr", 00:05:57.056 "nvmf_publish_mdns_prr", 00:05:57.056 "nvmf_subsystem_get_listeners", 
00:05:57.056 "nvmf_subsystem_get_qpairs", 00:05:57.056 "nvmf_subsystem_get_controllers", 00:05:57.056 "nvmf_get_stats", 00:05:57.056 "nvmf_get_transports", 00:05:57.056 "nvmf_create_transport", 00:05:57.056 "nvmf_get_targets", 00:05:57.056 "nvmf_delete_target", 00:05:57.056 "nvmf_create_target", 00:05:57.056 "nvmf_subsystem_allow_any_host", 00:05:57.056 "nvmf_subsystem_set_keys", 00:05:57.056 "nvmf_subsystem_remove_host", 00:05:57.056 "nvmf_subsystem_add_host", 00:05:57.056 "nvmf_ns_remove_host", 00:05:57.056 "nvmf_ns_add_host", 00:05:57.056 "nvmf_subsystem_remove_ns", 00:05:57.056 "nvmf_subsystem_set_ns_ana_group", 00:05:57.056 "nvmf_subsystem_add_ns", 00:05:57.056 "nvmf_subsystem_listener_set_ana_state", 00:05:57.056 "nvmf_discovery_get_referrals", 00:05:57.056 "nvmf_discovery_remove_referral", 00:05:57.056 "nvmf_discovery_add_referral", 00:05:57.056 "nvmf_subsystem_remove_listener", 00:05:57.056 "nvmf_subsystem_add_listener", 00:05:57.056 "nvmf_delete_subsystem", 00:05:57.056 "nvmf_create_subsystem", 00:05:57.056 "nvmf_get_subsystems", 00:05:57.056 "env_dpdk_get_mem_stats", 00:05:57.056 "nbd_get_disks", 00:05:57.056 "nbd_stop_disk", 00:05:57.056 "nbd_start_disk", 00:05:57.056 "ublk_recover_disk", 00:05:57.056 "ublk_get_disks", 00:05:57.056 "ublk_stop_disk", 00:05:57.056 "ublk_start_disk", 00:05:57.056 "ublk_destroy_target", 00:05:57.056 "ublk_create_target", 00:05:57.056 "virtio_blk_create_transport", 00:05:57.056 "virtio_blk_get_transports", 00:05:57.056 "vhost_controller_set_coalescing", 00:05:57.056 "vhost_get_controllers", 00:05:57.056 "vhost_delete_controller", 00:05:57.056 "vhost_create_blk_controller", 00:05:57.056 "vhost_scsi_controller_remove_target", 00:05:57.056 "vhost_scsi_controller_add_target", 00:05:57.056 "vhost_start_scsi_controller", 00:05:57.056 "vhost_create_scsi_controller", 00:05:57.056 "thread_set_cpumask", 00:05:57.056 "scheduler_set_options", 00:05:57.056 "framework_get_governor", 00:05:57.056 "framework_get_scheduler", 00:05:57.056 
"framework_set_scheduler", 00:05:57.056 "framework_get_reactors", 00:05:57.056 "thread_get_io_channels", 00:05:57.056 "thread_get_pollers", 00:05:57.056 "thread_get_stats", 00:05:57.056 "framework_monitor_context_switch", 00:05:57.056 "spdk_kill_instance", 00:05:57.056 "log_enable_timestamps", 00:05:57.056 "log_get_flags", 00:05:57.056 "log_clear_flag", 00:05:57.056 "log_set_flag", 00:05:57.056 "log_get_level", 00:05:57.056 "log_set_level", 00:05:57.056 "log_get_print_level", 00:05:57.056 "log_set_print_level", 00:05:57.056 "framework_enable_cpumask_locks", 00:05:57.056 "framework_disable_cpumask_locks", 00:05:57.056 "framework_wait_init", 00:05:57.056 "framework_start_init", 00:05:57.056 "scsi_get_devices", 00:05:57.056 "bdev_get_histogram", 00:05:57.056 "bdev_enable_histogram", 00:05:57.056 "bdev_set_qos_limit", 00:05:57.056 "bdev_set_qd_sampling_period", 00:05:57.056 "bdev_get_bdevs", 00:05:57.056 "bdev_reset_iostat", 00:05:57.056 "bdev_get_iostat", 00:05:57.056 "bdev_examine", 00:05:57.056 "bdev_wait_for_examine", 00:05:57.056 "bdev_set_options", 00:05:57.056 "accel_get_stats", 00:05:57.056 "accel_set_options", 00:05:57.056 "accel_set_driver", 00:05:57.056 "accel_crypto_key_destroy", 00:05:57.056 "accel_crypto_keys_get", 00:05:57.056 "accel_crypto_key_create", 00:05:57.056 "accel_assign_opc", 00:05:57.056 "accel_get_module_info", 00:05:57.056 "accel_get_opc_assignments", 00:05:57.056 "vmd_rescan", 00:05:57.056 "vmd_remove_device", 00:05:57.056 "vmd_enable", 00:05:57.056 "sock_get_default_impl", 00:05:57.056 "sock_set_default_impl", 00:05:57.056 "sock_impl_set_options", 00:05:57.056 "sock_impl_get_options", 00:05:57.056 "iobuf_get_stats", 00:05:57.056 "iobuf_set_options", 00:05:57.056 "keyring_get_keys", 00:05:57.056 "vfu_tgt_set_base_path", 00:05:57.056 "framework_get_pci_devices", 00:05:57.056 "framework_get_config", 00:05:57.056 "framework_get_subsystems", 00:05:57.056 "fsdev_set_opts", 00:05:57.056 "fsdev_get_opts", 00:05:57.056 "trace_get_info", 
00:05:57.056 "trace_get_tpoint_group_mask", 00:05:57.056 "trace_disable_tpoint_group", 00:05:57.056 "trace_enable_tpoint_group", 00:05:57.056 "trace_clear_tpoint_mask", 00:05:57.056 "trace_set_tpoint_mask", 00:05:57.056 "notify_get_notifications", 00:05:57.056 "notify_get_types", 00:05:57.056 "spdk_get_version", 00:05:57.056 "rpc_get_methods" 00:05:57.056 ] 00:05:57.056 22:37:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.056 22:37:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:57.056 22:37:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4137592 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 4137592 ']' 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 4137592 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4137592 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4137592' 00:05:57.056 killing process with pid 4137592 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 4137592 00:05:57.056 22:37:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 4137592 00:05:57.626 00:05:57.626 real 0m1.335s 00:05:57.626 user 0m2.395s 00:05:57.626 sys 0m0.457s 00:05:57.626 22:37:05 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.626 22:37:05 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:05:57.626 ************************************ 00:05:57.626 END TEST spdkcli_tcp 00:05:57.626 ************************************ 00:05:57.626 22:37:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.626 22:37:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.626 22:37:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.626 22:37:05 -- common/autotest_common.sh@10 -- # set +x 00:05:57.626 ************************************ 00:05:57.626 START TEST dpdk_mem_utility 00:05:57.626 ************************************ 00:05:57.626 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.626 * Looking for test storage... 00:05:57.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:57.626 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.626 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.626 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.626 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:57.626 22:37:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:57.885 22:37:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.885 22:37:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:57.885 22:37:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.885 22:37:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.885 22:37:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.885 22:37:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:57.885 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.885 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:05:57.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.885 --rc genhtml_branch_coverage=1 00:05:57.885 --rc genhtml_function_coverage=1 00:05:57.885 --rc genhtml_legend=1 00:05:57.885 --rc geninfo_all_blocks=1 00:05:57.885 --rc geninfo_unexecuted_blocks=1 00:05:57.885 00:05:57.885 ' 00:05:57.885 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.885 --rc genhtml_branch_coverage=1 00:05:57.885 --rc genhtml_function_coverage=1 00:05:57.885 --rc genhtml_legend=1 00:05:57.885 --rc geninfo_all_blocks=1 00:05:57.885 --rc geninfo_unexecuted_blocks=1 00:05:57.885 00:05:57.885 ' 00:05:57.885 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.885 --rc genhtml_branch_coverage=1 00:05:57.885 --rc genhtml_function_coverage=1 00:05:57.885 --rc genhtml_legend=1 00:05:57.885 --rc geninfo_all_blocks=1 00:05:57.885 --rc geninfo_unexecuted_blocks=1 00:05:57.885 00:05:57.885 ' 00:05:57.885 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.885 --rc genhtml_branch_coverage=1 00:05:57.885 --rc genhtml_function_coverage=1 00:05:57.885 --rc genhtml_legend=1 00:05:57.885 --rc geninfo_all_blocks=1 00:05:57.885 --rc geninfo_unexecuted_blocks=1 00:05:57.885 00:05:57.885 ' 00:05:57.885 22:37:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:57.885 22:37:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4137855 00:05:57.885 22:37:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.885 22:37:05 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4137855 00:05:57.885 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 4137855 ']' 00:05:57.885 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.885 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.885 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.885 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.885 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:57.885 [2024-12-10 22:37:05.414938] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:05:57.885 [2024-12-10 22:37:05.415046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137855 ] 00:05:57.885 [2024-12-10 22:37:05.498693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.885 [2024-12-10 22:37:05.568882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.145 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.145 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:58.145 22:37:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:58.145 22:37:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:58.145 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.145 
22:37:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.145 { 00:05:58.145 "filename": "/tmp/spdk_mem_dump.txt" 00:05:58.145 } 00:05:58.145 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.145 22:37:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:58.406 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:58.406 1 heaps totaling size 818.000000 MiB 00:05:58.406 size: 818.000000 MiB heap id: 0 00:05:58.406 end heaps---------- 00:05:58.406 9 mempools totaling size 603.782043 MiB 00:05:58.406 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:58.406 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:58.406 size: 100.555481 MiB name: bdev_io_4137855 00:05:58.406 size: 50.003479 MiB name: msgpool_4137855 00:05:58.406 size: 36.509338 MiB name: fsdev_io_4137855 00:05:58.406 size: 21.763794 MiB name: PDU_Pool 00:05:58.406 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:58.406 size: 4.133484 MiB name: evtpool_4137855 00:05:58.406 size: 0.026123 MiB name: Session_Pool 00:05:58.406 end mempools------- 00:05:58.406 6 memzones totaling size 4.142822 MiB 00:05:58.406 size: 1.000366 MiB name: RG_ring_0_4137855 00:05:58.406 size: 1.000366 MiB name: RG_ring_1_4137855 00:05:58.406 size: 1.000366 MiB name: RG_ring_4_4137855 00:05:58.406 size: 1.000366 MiB name: RG_ring_5_4137855 00:05:58.406 size: 0.125366 MiB name: RG_ring_2_4137855 00:05:58.406 size: 0.015991 MiB name: RG_ring_3_4137855 00:05:58.406 end memzones------- 00:05:58.406 22:37:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:58.406 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:58.406 list of free elements. 
size: 10.852478 MiB 00:05:58.406 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:58.406 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:58.406 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:58.406 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:58.406 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:58.406 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:58.406 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:58.406 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:58.406 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:58.406 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:58.406 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:58.406 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:58.406 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:58.406 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:58.406 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:58.406 list of standard malloc elements. 
size: 199.218628 MiB 00:05:58.406 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:58.406 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:58.406 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:58.406 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:58.406 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:58.406 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:58.406 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:58.406 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:58.406 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:58.406 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:58.406 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:58.406 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:58.406 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:58.406 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:58.406 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:58.406 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:58.406 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:58.406 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:58.406 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:58.406 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:58.406 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:58.406 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:58.406 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:58.406 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:58.406 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:58.406 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:58.407 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:58.407 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:58.407 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:58.407 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:58.407 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:58.407 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:58.407 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:58.407 list of memzone associated elements. 
size: 607.928894 MiB 00:05:58.407 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:58.407 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:58.407 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:58.407 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:58.407 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:58.407 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_4137855_0 00:05:58.407 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:58.407 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4137855_0 00:05:58.407 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:58.407 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_4137855_0 00:05:58.407 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:58.407 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:58.407 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:58.407 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:58.407 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:58.407 associated memzone info: size: 3.000122 MiB name: MP_evtpool_4137855_0 00:05:58.407 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:58.407 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4137855 00:05:58.407 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:58.407 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4137855 00:05:58.407 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:58.407 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:58.407 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:58.407 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:58.407 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:58.407 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:58.407 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:58.407 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:58.407 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:58.407 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4137855 00:05:58.407 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:58.407 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4137855 00:05:58.407 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:58.407 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4137855 00:05:58.407 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:58.407 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4137855 00:05:58.407 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:58.407 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_4137855 00:05:58.407 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:58.407 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4137855 00:05:58.407 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:58.407 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:58.407 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:58.407 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:58.407 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:58.407 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:58.407 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:58.407 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_4137855 00:05:58.407 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:58.407 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4137855 00:05:58.407 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:05:58.407 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:58.407 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:58.407 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:58.407 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:58.407 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4137855 00:05:58.407 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:58.407 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:58.407 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:58.407 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4137855 00:05:58.407 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:58.407 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_4137855 00:05:58.407 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:58.407 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4137855 00:05:58.407 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:58.407 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:58.407 22:37:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:58.407 22:37:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4137855 00:05:58.407 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 4137855 ']' 00:05:58.407 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 4137855 00:05:58.407 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:58.407 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.407 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4137855 00:05:58.407 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.407 22:37:05 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.407 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4137855' 00:05:58.407 killing process with pid 4137855 00:05:58.407 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 4137855 00:05:58.407 22:37:05 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 4137855 00:05:58.978 00:05:58.978 real 0m1.197s 00:05:58.978 user 0m1.145s 00:05:58.978 sys 0m0.451s 00:05:58.978 22:37:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.978 22:37:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.978 ************************************ 00:05:58.978 END TEST dpdk_mem_utility 00:05:58.978 ************************************ 00:05:58.978 22:37:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:58.978 22:37:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.978 22:37:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.978 22:37:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.978 ************************************ 00:05:58.978 START TEST event 00:05:58.978 ************************************ 00:05:58.978 22:37:06 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:58.978 * Looking for test storage... 
00:05:58.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:58.978 22:37:06 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:58.978 22:37:06 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:58.978 22:37:06 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:58.978 22:37:06 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:58.978 22:37:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.978 22:37:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.978 22:37:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.978 22:37:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.978 22:37:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.978 22:37:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.978 22:37:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.978 22:37:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.978 22:37:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.978 22:37:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.978 22:37:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.978 22:37:06 event -- scripts/common.sh@344 -- # case "$op" in 00:05:58.978 22:37:06 event -- scripts/common.sh@345 -- # : 1 00:05:58.978 22:37:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.978 22:37:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.978 22:37:06 event -- scripts/common.sh@365 -- # decimal 1 00:05:58.978 22:37:06 event -- scripts/common.sh@353 -- # local d=1 00:05:58.978 22:37:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.978 22:37:06 event -- scripts/common.sh@355 -- # echo 1 00:05:58.978 22:37:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.978 22:37:06 event -- scripts/common.sh@366 -- # decimal 2 00:05:58.978 22:37:06 event -- scripts/common.sh@353 -- # local d=2 00:05:58.978 22:37:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.978 22:37:06 event -- scripts/common.sh@355 -- # echo 2 00:05:58.978 22:37:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.978 22:37:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.978 22:37:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.978 22:37:06 event -- scripts/common.sh@368 -- # return 0 00:05:58.978 22:37:06 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.978 22:37:06 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:58.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.978 --rc genhtml_branch_coverage=1 00:05:58.978 --rc genhtml_function_coverage=1 00:05:58.978 --rc genhtml_legend=1 00:05:58.978 --rc geninfo_all_blocks=1 00:05:58.978 --rc geninfo_unexecuted_blocks=1 00:05:58.978 00:05:58.978 ' 00:05:58.978 22:37:06 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:58.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.978 --rc genhtml_branch_coverage=1 00:05:58.978 --rc genhtml_function_coverage=1 00:05:58.978 --rc genhtml_legend=1 00:05:58.978 --rc geninfo_all_blocks=1 00:05:58.978 --rc geninfo_unexecuted_blocks=1 00:05:58.978 00:05:58.978 ' 00:05:58.978 22:37:06 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:58.978 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:58.978 --rc genhtml_branch_coverage=1 00:05:58.978 --rc genhtml_function_coverage=1 00:05:58.978 --rc genhtml_legend=1 00:05:58.978 --rc geninfo_all_blocks=1 00:05:58.978 --rc geninfo_unexecuted_blocks=1 00:05:58.978 00:05:58.978 ' 00:05:58.979 22:37:06 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:58.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.979 --rc genhtml_branch_coverage=1 00:05:58.979 --rc genhtml_function_coverage=1 00:05:58.979 --rc genhtml_legend=1 00:05:58.979 --rc geninfo_all_blocks=1 00:05:58.979 --rc geninfo_unexecuted_blocks=1 00:05:58.979 00:05:58.979 ' 00:05:58.979 22:37:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:58.979 22:37:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:58.979 22:37:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.979 22:37:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:58.979 22:37:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.979 22:37:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.979 ************************************ 00:05:58.979 START TEST event_perf 00:05:58.979 ************************************ 00:05:58.979 22:37:06 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.979 Running I/O for 1 seconds...[2024-12-10 22:37:06.634302] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:05:58.979 [2024-12-10 22:37:06.634370] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138124 ] 00:05:58.979 [2024-12-10 22:37:06.704916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.238 [2024-12-10 22:37:06.768961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.238 [2024-12-10 22:37:06.769023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.239 [2024-12-10 22:37:06.769090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.239 [2024-12-10 22:37:06.769093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.176 Running I/O for 1 seconds... 00:06:00.176 lcore 0: 231842 00:06:00.176 lcore 1: 231841 00:06:00.176 lcore 2: 231841 00:06:00.176 lcore 3: 231841 00:06:00.176 done. 
00:06:00.176 00:06:00.176 real 0m1.212s 00:06:00.176 user 0m4.130s 00:06:00.176 sys 0m0.073s 00:06:00.176 22:37:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.176 22:37:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.176 ************************************ 00:06:00.176 END TEST event_perf 00:06:00.176 ************************************ 00:06:00.176 22:37:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:00.176 22:37:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:00.176 22:37:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.176 22:37:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.176 ************************************ 00:06:00.176 START TEST event_reactor 00:06:00.176 ************************************ 00:06:00.176 22:37:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:00.176 [2024-12-10 22:37:07.893826] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:06:00.176 [2024-12-10 22:37:07.893907] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138281 ] 00:06:00.437 [2024-12-10 22:37:07.960245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.437 [2024-12-10 22:37:08.015218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.372 test_start 00:06:01.372 oneshot 00:06:01.372 tick 100 00:06:01.372 tick 100 00:06:01.372 tick 250 00:06:01.372 tick 100 00:06:01.372 tick 100 00:06:01.372 tick 100 00:06:01.372 tick 250 00:06:01.372 tick 500 00:06:01.372 tick 100 00:06:01.372 tick 100 00:06:01.372 tick 250 00:06:01.372 tick 100 00:06:01.372 tick 100 00:06:01.372 test_end 00:06:01.372 00:06:01.372 real 0m1.197s 00:06:01.372 user 0m1.130s 00:06:01.372 sys 0m0.064s 00:06:01.372 22:37:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.372 22:37:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:01.372 ************************************ 00:06:01.372 END TEST event_reactor 00:06:01.372 ************************************ 00:06:01.632 22:37:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.632 22:37:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:01.632 22:37:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.632 22:37:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.632 ************************************ 00:06:01.632 START TEST event_reactor_perf 00:06:01.632 ************************************ 00:06:01.632 22:37:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:01.632 [2024-12-10 22:37:09.145303] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:01.632 [2024-12-10 22:37:09.145367] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138433 ] 00:06:01.632 [2024-12-10 22:37:09.211374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.632 [2024-12-10 22:37:09.266452] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.015 test_start 00:06:03.015 test_end 00:06:03.015 Performance: 436457 events per second 00:06:03.015 00:06:03.015 real 0m1.201s 00:06:03.015 user 0m1.134s 00:06:03.015 sys 0m0.062s 00:06:03.015 22:37:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.015 22:37:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.015 ************************************ 00:06:03.015 END TEST event_reactor_perf 00:06:03.015 ************************************ 00:06:03.015 22:37:10 event -- event/event.sh@49 -- # uname -s 00:06:03.015 22:37:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:03.015 22:37:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:03.015 22:37:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.015 22:37:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.015 22:37:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.015 ************************************ 00:06:03.015 START TEST event_scheduler 00:06:03.015 ************************************ 00:06:03.015 22:37:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:03.015 * Looking for test storage... 00:06:03.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:03.015 22:37:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.015 22:37:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.015 22:37:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.015 22:37:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.015 22:37:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:03.015 22:37:10 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.015 22:37:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.015 --rc genhtml_branch_coverage=1 00:06:03.015 --rc genhtml_function_coverage=1 00:06:03.015 --rc genhtml_legend=1 00:06:03.015 --rc geninfo_all_blocks=1 00:06:03.015 --rc geninfo_unexecuted_blocks=1 00:06:03.015 00:06:03.015 ' 00:06:03.015 22:37:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.015 --rc genhtml_branch_coverage=1 00:06:03.015 --rc genhtml_function_coverage=1 00:06:03.015 --rc 
genhtml_legend=1 00:06:03.015 --rc geninfo_all_blocks=1 00:06:03.015 --rc geninfo_unexecuted_blocks=1 00:06:03.015 00:06:03.015 ' 00:06:03.015 22:37:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.015 --rc genhtml_branch_coverage=1 00:06:03.015 --rc genhtml_function_coverage=1 00:06:03.015 --rc genhtml_legend=1 00:06:03.015 --rc geninfo_all_blocks=1 00:06:03.015 --rc geninfo_unexecuted_blocks=1 00:06:03.015 00:06:03.015 ' 00:06:03.015 22:37:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.015 --rc genhtml_branch_coverage=1 00:06:03.015 --rc genhtml_function_coverage=1 00:06:03.016 --rc genhtml_legend=1 00:06:03.016 --rc geninfo_all_blocks=1 00:06:03.016 --rc geninfo_unexecuted_blocks=1 00:06:03.016 00:06:03.016 ' 00:06:03.016 22:37:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:03.016 22:37:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4138621 00:06:03.016 22:37:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:03.016 22:37:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.016 22:37:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4138621 00:06:03.016 22:37:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 4138621 ']' 00:06:03.016 22:37:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.016 22:37:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.016 22:37:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.016 22:37:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.016 22:37:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.016 [2024-12-10 22:37:10.581611] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:03.016 [2024-12-10 22:37:10.581713] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138621 ] 00:06:03.016 [2024-12-10 22:37:10.651411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.016 [2024-12-10 22:37:10.717163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.016 [2024-12-10 22:37:10.717229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.016 [2024-12-10 22:37:10.717293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.016 [2024-12-10 22:37:10.717296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.273 22:37:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.273 22:37:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:03.273 22:37:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:03.273 22:37:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.273 22:37:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.273 [2024-12-10 22:37:10.822311] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:03.273 [2024-12-10 22:37:10.822337] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:03.273 [2024-12-10 22:37:10.822355] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:03.273 [2024-12-10 22:37:10.822365] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:03.273 [2024-12-10 22:37:10.822375] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:03.273 22:37:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.273 22:37:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:03.273 22:37:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.273 22:37:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.273 [2024-12-10 22:37:10.926143] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:03.273 22:37:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.273 22:37:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:03.273 22:37:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.273 22:37:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.273 22:37:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.273 ************************************ 00:06:03.273 START TEST scheduler_create_thread 00:06:03.273 ************************************ 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.273 2 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.273 3 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.273 4 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.273 5 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.273 22:37:10 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.273 6 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.273 22:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.273 7 00:06:03.273 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.273 22:37:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:03.273 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.273 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.533 8 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.533 22:37:11 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.533 9 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.533 10 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.533 22:37:11 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.533 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.103 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.103 00:06:04.103 real 0m0.589s 00:06:04.103 user 0m0.010s 00:06:04.103 sys 0m0.003s 00:06:04.103 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.103 22:37:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.103 ************************************ 00:06:04.103 END TEST scheduler_create_thread 00:06:04.103 ************************************ 00:06:04.103 22:37:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:04.103 22:37:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4138621 00:06:04.103 22:37:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 4138621 ']' 00:06:04.103 22:37:11 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 4138621 00:06:04.103 22:37:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:04.103 22:37:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.103 22:37:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4138621 00:06:04.103 22:37:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:04.103 22:37:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:04.103 22:37:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4138621' 00:06:04.103 killing process with pid 4138621 00:06:04.103 22:37:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 4138621 00:06:04.103 22:37:11 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 4138621 00:06:04.362 [2024-12-10 22:37:12.022256] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:04.620 00:06:04.620 real 0m1.844s 00:06:04.620 user 0m2.486s 00:06:04.620 sys 0m0.356s 00:06:04.620 22:37:12 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.620 22:37:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.620 ************************************ 00:06:04.620 END TEST event_scheduler 00:06:04.620 ************************************ 00:06:04.620 22:37:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:04.620 22:37:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:04.620 22:37:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.620 22:37:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.620 22:37:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.620 ************************************ 00:06:04.620 START TEST app_repeat 00:06:04.620 ************************************ 00:06:04.620 22:37:12 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4138937 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4138937' 00:06:04.620 Process app_repeat pid: 4138937 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:04.620 spdk_app_start Round 0 00:06:04.620 22:37:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4138937 /var/tmp/spdk-nbd.sock 00:06:04.620 22:37:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4138937 ']' 00:06:04.620 22:37:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.620 22:37:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.620 22:37:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.620 22:37:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.620 22:37:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.620 [2024-12-10 22:37:12.311160] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:06:04.620 [2024-12-10 22:37:12.311232] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138937 ] 00:06:04.878 [2024-12-10 22:37:12.379639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.878 [2024-12-10 22:37:12.434902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.878 [2024-12-10 22:37:12.434906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.878 22:37:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.878 22:37:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.878 22:37:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.169 Malloc0 00:06:05.169 22:37:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.427 Malloc1 00:06:05.427 22:37:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.427 
22:37:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.427 22:37:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.995 /dev/nbd0 00:06:05.995 22:37:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.995 22:37:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:05.995 1+0 records in 00:06:05.995 1+0 records out 00:06:05.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173793 s, 23.6 MB/s 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.995 22:37:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.995 22:37:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.995 22:37:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.995 22:37:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.254 /dev/nbd1 00:06:06.254 22:37:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.254 22:37:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.254 22:37:13 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.254 1+0 records in 00:06:06.254 1+0 records out 00:06:06.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203381 s, 20.1 MB/s 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.254 22:37:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.254 22:37:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.254 22:37:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.254 22:37:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.254 22:37:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.254 22:37:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.512 22:37:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.512 { 00:06:06.512 "nbd_device": "/dev/nbd0", 00:06:06.512 "bdev_name": "Malloc0" 00:06:06.512 }, 00:06:06.512 { 00:06:06.512 "nbd_device": "/dev/nbd1", 00:06:06.512 "bdev_name": "Malloc1" 00:06:06.512 } 00:06:06.512 ]' 00:06:06.512 22:37:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.512 { 00:06:06.513 "nbd_device": "/dev/nbd0", 00:06:06.513 "bdev_name": "Malloc0" 00:06:06.513 
}, 00:06:06.513 { 00:06:06.513 "nbd_device": "/dev/nbd1", 00:06:06.513 "bdev_name": "Malloc1" 00:06:06.513 } 00:06:06.513 ]' 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.513 /dev/nbd1' 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.513 /dev/nbd1' 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.513 256+0 records in 00:06:06.513 256+0 records out 00:06:06.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049397 s, 212 MB/s 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.513 256+0 records in 00:06:06.513 256+0 records out 00:06:06.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020591 s, 50.9 MB/s 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.513 256+0 records in 00:06:06.513 256+0 records out 00:06:06.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222884 s, 47.0 MB/s 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.513 22:37:14 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.513 22:37:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.771 22:37:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.771 22:37:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.771 22:37:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.771 22:37:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.771 22:37:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.771 22:37:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.771 22:37:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.771 22:37:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.771 22:37:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.771 22:37:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.028 22:37:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.028 22:37:14 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.028 22:37:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.028 22:37:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.028 22:37:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.028 22:37:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.028 22:37:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.028 22:37:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.028 22:37:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.028 22:37:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.028 22:37:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.286 22:37:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.286 22:37:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.286 22:37:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.553 22:37:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.553 22:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.553 22:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.553 22:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:07.553 22:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.553 22:37:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.553 22:37:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.553 22:37:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.553 22:37:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.553 22:37:15 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.814 22:37:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.074 [2024-12-10 22:37:15.562559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.074 [2024-12-10 22:37:15.616121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.074 [2024-12-10 22:37:15.616121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.075 [2024-12-10 22:37:15.673823] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.075 [2024-12-10 22:37:15.673913] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.367 22:37:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.367 22:37:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:11.367 spdk_app_start Round 1 00:06:11.367 22:37:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4138937 /var/tmp/spdk-nbd.sock 00:06:11.367 22:37:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4138937 ']' 00:06:11.367 22:37:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.367 22:37:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.367 22:37:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:11.367 22:37:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.367 22:37:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.367 22:37:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.367 22:37:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:11.367 22:37:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.367 Malloc0 00:06:11.367 22:37:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.625 Malloc1 00:06:11.625 22:37:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.625 22:37:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.882 /dev/nbd0 00:06:11.882 22:37:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.882 22:37:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.882 1+0 records in 00:06:11.882 1+0 records out 00:06:11.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186939 s, 21.9 MB/s 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:11.882 22:37:19 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.882 22:37:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:11.882 22:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.882 22:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.882 22:37:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.141 /dev/nbd1 00:06:12.141 22:37:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.141 22:37:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.141 1+0 records in 00:06:12.141 1+0 records out 00:06:12.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231772 s, 17.7 MB/s 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.141 22:37:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.141 22:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.141 22:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.141 22:37:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.141 22:37:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.141 22:37:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.707 { 00:06:12.707 "nbd_device": "/dev/nbd0", 00:06:12.707 "bdev_name": "Malloc0" 00:06:12.707 }, 00:06:12.707 { 00:06:12.707 "nbd_device": "/dev/nbd1", 00:06:12.707 "bdev_name": "Malloc1" 00:06:12.707 } 00:06:12.707 ]' 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.707 { 00:06:12.707 "nbd_device": "/dev/nbd0", 00:06:12.707 "bdev_name": "Malloc0" 00:06:12.707 }, 00:06:12.707 { 00:06:12.707 "nbd_device": "/dev/nbd1", 00:06:12.707 "bdev_name": "Malloc1" 00:06:12.707 } 00:06:12.707 ]' 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.707 /dev/nbd1' 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.707 /dev/nbd1' 00:06:12.707 
22:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.707 256+0 records in 00:06:12.707 256+0 records out 00:06:12.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516196 s, 203 MB/s 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.707 256+0 records in 00:06:12.707 256+0 records out 00:06:12.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205154 s, 51.1 MB/s 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.707 256+0 records in 00:06:12.707 256+0 records out 00:06:12.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223948 s, 46.8 MB/s 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.707 22:37:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.965 22:37:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.965 22:37:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.965 22:37:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.965 22:37:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.965 22:37:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.965 22:37:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.965 22:37:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.965 22:37:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.965 22:37:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.965 22:37:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.222 22:37:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.222 22:37:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.222 22:37:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.222 22:37:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.222 22:37:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.222 22:37:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.222 22:37:20 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:13.222 22:37:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.222 22:37:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.222 22:37:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.222 22:37:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.480 22:37:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.480 22:37:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.049 22:37:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.049 [2024-12-10 22:37:21.693262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.049 [2024-12-10 22:37:21.746713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.049 [2024-12-10 22:37:21.746717] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.307 [2024-12-10 22:37:21.805179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.307 [2024-12-10 22:37:21.805269] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.842 22:37:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:16.842 22:37:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:16.842 spdk_app_start Round 2 00:06:16.842 22:37:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4138937 /var/tmp/spdk-nbd.sock 00:06:16.842 22:37:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4138937 ']' 00:06:16.842 22:37:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.842 22:37:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.842 22:37:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:16.842 22:37:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.842 22:37:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.100 22:37:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.100 22:37:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.100 22:37:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.357 Malloc0 00:06:17.357 22:37:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.614 Malloc1 00:06:17.614 22:37:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.614 22:37:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.180 /dev/nbd0 00:06:18.180 22:37:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.180 22:37:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.180 1+0 records in 00:06:18.180 1+0 records out 00:06:18.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233111 s, 17.6 MB/s 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.180 22:37:25 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.180 22:37:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.180 22:37:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.180 22:37:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.180 22:37:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.438 /dev/nbd1 00:06:18.439 22:37:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.439 22:37:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.439 1+0 records in 00:06:18.439 1+0 records out 00:06:18.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231998 s, 17.7 MB/s 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.439 22:37:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.439 22:37:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.439 22:37:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.439 22:37:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.439 22:37:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.439 22:37:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.697 { 00:06:18.697 "nbd_device": "/dev/nbd0", 00:06:18.697 "bdev_name": "Malloc0" 00:06:18.697 }, 00:06:18.697 { 00:06:18.697 "nbd_device": "/dev/nbd1", 00:06:18.697 "bdev_name": "Malloc1" 00:06:18.697 } 00:06:18.697 ]' 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.697 { 00:06:18.697 "nbd_device": "/dev/nbd0", 00:06:18.697 "bdev_name": "Malloc0" 00:06:18.697 }, 00:06:18.697 { 00:06:18.697 "nbd_device": "/dev/nbd1", 00:06:18.697 "bdev_name": "Malloc1" 00:06:18.697 } 00:06:18.697 ]' 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.697 /dev/nbd1' 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.697 /dev/nbd1' 00:06:18.697 
22:37:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.697 256+0 records in 00:06:18.697 256+0 records out 00:06:18.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439716 s, 238 MB/s 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.697 256+0 records in 00:06:18.697 256+0 records out 00:06:18.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202795 s, 51.7 MB/s 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.697 256+0 records in 00:06:18.697 256+0 records out 00:06:18.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220716 s, 47.5 MB/s 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.697 22:37:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.956 22:37:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.956 22:37:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.956 22:37:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.956 22:37:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.956 22:37:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.956 22:37:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.956 22:37:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.956 22:37:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.956 22:37:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.956 22:37:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.525 22:37:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.525 22:37:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.525 22:37:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.525 22:37:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.525 22:37:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.525 22:37:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.525 22:37:26 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:19.525 22:37:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.525 22:37:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.525 22:37:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.525 22:37:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.525 22:37:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.525 22:37:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.525 22:37:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.782 22:37:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.783 22:37:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.783 22:37:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.783 22:37:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.783 22:37:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.783 22:37:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.783 22:37:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.783 22:37:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.783 22:37:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.783 22:37:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.041 22:37:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.300 [2024-12-10 22:37:27.775186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.300 [2024-12-10 22:37:27.829462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.300 [2024-12-10 22:37:27.829462] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.300 [2024-12-10 22:37:27.884049] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.300 [2024-12-10 22:37:27.884117] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.884 22:37:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4138937 /var/tmp/spdk-nbd.sock 00:06:22.884 22:37:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4138937 ']' 00:06:22.884 22:37:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.884 22:37:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.884 22:37:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:22.884 22:37:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.884 22:37:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.142 22:37:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.142 22:37:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:23.142 22:37:30 event.app_repeat -- event/event.sh@39 -- # killprocess 4138937 00:06:23.142 22:37:30 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 4138937 ']' 00:06:23.142 22:37:30 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 4138937 00:06:23.142 22:37:30 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:23.142 22:37:30 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.142 22:37:30 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4138937 00:06:23.402 22:37:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.402 22:37:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.402 22:37:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4138937' 00:06:23.402 killing process with pid 4138937 00:06:23.402 22:37:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 4138937 00:06:23.402 22:37:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 4138937 00:06:23.402 spdk_app_start is called in Round 0. 00:06:23.402 Shutdown signal received, stop current app iteration 00:06:23.402 Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 reinitialization... 00:06:23.402 spdk_app_start is called in Round 1. 00:06:23.402 Shutdown signal received, stop current app iteration 00:06:23.402 Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 reinitialization... 00:06:23.402 spdk_app_start is called in Round 2. 
00:06:23.402 Shutdown signal received, stop current app iteration 00:06:23.402 Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 reinitialization... 00:06:23.402 spdk_app_start is called in Round 3. 00:06:23.402 Shutdown signal received, stop current app iteration 00:06:23.402 22:37:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:23.402 22:37:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:23.402 00:06:23.402 real 0m18.787s 00:06:23.402 user 0m41.590s 00:06:23.402 sys 0m3.210s 00:06:23.402 22:37:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.402 22:37:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.402 ************************************ 00:06:23.402 END TEST app_repeat 00:06:23.402 ************************************ 00:06:23.402 22:37:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:23.402 22:37:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:23.402 22:37:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.402 22:37:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.402 22:37:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.402 ************************************ 00:06:23.402 START TEST cpu_locks 00:06:23.402 ************************************ 00:06:23.402 22:37:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:23.662 * Looking for test storage... 
00:06:23.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:23.662 22:37:31 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.662 22:37:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.662 22:37:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.662 22:37:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.662 22:37:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:23.662 22:37:31 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.662 22:37:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.662 --rc genhtml_branch_coverage=1 00:06:23.662 --rc genhtml_function_coverage=1 00:06:23.662 --rc genhtml_legend=1 00:06:23.662 --rc geninfo_all_blocks=1 00:06:23.662 --rc geninfo_unexecuted_blocks=1 00:06:23.662 00:06:23.662 ' 00:06:23.662 22:37:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.662 --rc genhtml_branch_coverage=1 00:06:23.662 --rc genhtml_function_coverage=1 00:06:23.662 --rc genhtml_legend=1 00:06:23.662 --rc geninfo_all_blocks=1 00:06:23.662 --rc geninfo_unexecuted_blocks=1 
00:06:23.662 00:06:23.662 ' 00:06:23.662 22:37:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.662 --rc genhtml_branch_coverage=1 00:06:23.662 --rc genhtml_function_coverage=1 00:06:23.662 --rc genhtml_legend=1 00:06:23.663 --rc geninfo_all_blocks=1 00:06:23.663 --rc geninfo_unexecuted_blocks=1 00:06:23.663 00:06:23.663 ' 00:06:23.663 22:37:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.663 --rc genhtml_branch_coverage=1 00:06:23.663 --rc genhtml_function_coverage=1 00:06:23.663 --rc genhtml_legend=1 00:06:23.663 --rc geninfo_all_blocks=1 00:06:23.663 --rc geninfo_unexecuted_blocks=1 00:06:23.663 00:06:23.663 ' 00:06:23.663 22:37:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:23.663 22:37:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:23.663 22:37:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:23.663 22:37:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:23.663 22:37:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.663 22:37:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.663 22:37:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.663 ************************************ 00:06:23.663 START TEST default_locks 00:06:23.663 ************************************ 00:06:23.663 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:23.663 22:37:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4141395 00:06:23.663 22:37:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:23.663 22:37:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4141395 00:06:23.663 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4141395 ']' 00:06:23.663 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.663 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.663 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.663 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.663 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.663 [2024-12-10 22:37:31.340879] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:06:23.663 [2024-12-10 22:37:31.340963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4141395 ] 00:06:23.923 [2024-12-10 22:37:31.406247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.923 [2024-12-10 22:37:31.462009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.181 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.181 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:24.181 22:37:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4141395 00:06:24.181 22:37:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4141395 00:06:24.181 22:37:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.440 lslocks: write error 00:06:24.440 22:37:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4141395 00:06:24.440 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 4141395 ']' 00:06:24.440 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 4141395 00:06:24.440 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:24.440 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.440 22:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4141395 00:06:24.440 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.440 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.440 22:37:32 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 4141395' 00:06:24.440 killing process with pid 4141395 00:06:24.440 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 4141395 00:06:24.440 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 4141395 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4141395 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4141395 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 4141395 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4141395 ']' 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.010 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:25.011 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4141395) - No such process 00:06:25.011 ERROR: process (pid: 4141395) is no longer running 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.011 00:06:25.011 real 0m1.168s 00:06:25.011 user 0m1.144s 00:06:25.011 sys 0m0.480s 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.011 22:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.011 ************************************ 00:06:25.011 END TEST default_locks 00:06:25.011 ************************************ 00:06:25.011 22:37:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:25.011 22:37:32 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.011 22:37:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.011 22:37:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.011 ************************************ 00:06:25.011 START TEST default_locks_via_rpc 00:06:25.011 ************************************ 00:06:25.011 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:25.011 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4141594 00:06:25.011 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.011 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4141594 00:06:25.011 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4141594 ']' 00:06:25.011 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.011 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.011 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.011 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.011 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.011 [2024-12-10 22:37:32.560787] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:06:25.011 [2024-12-10 22:37:32.560894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4141594 ] 00:06:25.011 [2024-12-10 22:37:32.626421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.011 [2024-12-10 22:37:32.686311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.269 22:37:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4141594 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4141594 00:06:25.269 22:37:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.529 22:37:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4141594 00:06:25.529 22:37:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 4141594 ']' 00:06:25.529 22:37:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 4141594 00:06:25.529 22:37:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:25.529 22:37:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.529 22:37:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4141594 00:06:25.788 22:37:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.788 22:37:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.788 22:37:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4141594' 00:06:25.788 killing process with pid 4141594 00:06:25.788 22:37:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 4141594 00:06:25.788 22:37:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 4141594 00:06:26.048 00:06:26.048 real 0m1.179s 00:06:26.048 user 0m1.131s 00:06:26.048 sys 0m0.506s 00:06:26.048 22:37:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.048 22:37:33 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.048 ************************************ 00:06:26.048 END TEST default_locks_via_rpc 00:06:26.048 ************************************ 00:06:26.048 22:37:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:26.048 22:37:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.048 22:37:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.048 22:37:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.048 ************************************ 00:06:26.048 START TEST non_locking_app_on_locked_coremask 00:06:26.048 ************************************ 00:06:26.048 22:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:26.048 22:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4141754 00:06:26.049 22:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.049 22:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4141754 /var/tmp/spdk.sock 00:06:26.049 22:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4141754 ']' 00:06:26.049 22:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.049 22:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.049 22:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:26.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.049 22:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.049 22:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.307 [2024-12-10 22:37:33.787951] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:26.307 [2024-12-10 22:37:33.788039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4141754 ] 00:06:26.307 [2024-12-10 22:37:33.853890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.307 [2024-12-10 22:37:33.913723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.566 22:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.566 22:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.566 22:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4141768 00:06:26.566 22:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:26.566 22:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4141768 /var/tmp/spdk2.sock 00:06:26.566 22:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4141768 ']' 00:06:26.566 22:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:26.566 22:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.566 22:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.566 22:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.566 22:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.566 [2024-12-10 22:37:34.229229] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:26.566 [2024-12-10 22:37:34.229302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4141768 ] 00:06:26.825 [2024-12-10 22:37:34.328187] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:26.825 [2024-12-10 22:37:34.328213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.825 [2024-12-10 22:37:34.435488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.761 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.761 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:27.761 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4141754 00:06:27.761 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4141754 00:06:27.761 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.020 lslocks: write error 00:06:28.020 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4141754 00:06:28.020 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4141754 ']' 00:06:28.020 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4141754 00:06:28.020 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.020 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.020 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4141754 00:06:28.020 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.020 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.020 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 4141754' 00:06:28.020 killing process with pid 4141754 00:06:28.020 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4141754 00:06:28.020 22:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4141754 00:06:28.957 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4141768 00:06:28.957 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4141768 ']' 00:06:28.957 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4141768 00:06:28.957 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.957 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.957 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4141768 00:06:28.957 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.957 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.957 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4141768' 00:06:28.957 killing process with pid 4141768 00:06:28.957 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4141768 00:06:28.957 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4141768 00:06:29.216 00:06:29.216 real 0m3.196s 00:06:29.216 user 0m3.441s 00:06:29.216 sys 0m0.991s 00:06:29.216 22:37:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.216 22:37:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.216 ************************************ 00:06:29.216 END TEST non_locking_app_on_locked_coremask 00:06:29.216 ************************************ 00:06:29.475 22:37:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:29.475 22:37:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.475 22:37:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.475 22:37:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.475 ************************************ 00:06:29.475 START TEST locking_app_on_unlocked_coremask 00:06:29.475 ************************************ 00:06:29.475 22:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:29.475 22:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4142188 00:06:29.475 22:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:29.475 22:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4142188 /var/tmp/spdk.sock 00:06:29.475 22:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4142188 ']' 00:06:29.475 22:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.475 22:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.475 22:37:36 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.475 22:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.475 22:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.475 [2024-12-10 22:37:37.033860] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:29.475 [2024-12-10 22:37:37.033950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142188 ] 00:06:29.475 [2024-12-10 22:37:37.100405] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.475 [2024-12-10 22:37:37.100446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.475 [2024-12-10 22:37:37.160303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.735 22:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.735 22:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.735 22:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4142203 00:06:29.735 22:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.735 22:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4142203 /var/tmp/spdk2.sock 00:06:29.735 22:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4142203 ']' 00:06:29.735 22:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.735 22:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.735 22:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.735 22:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.735 22:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.995 [2024-12-10 22:37:37.473434] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:06:29.995 [2024-12-10 22:37:37.473515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142203 ] 00:06:29.995 [2024-12-10 22:37:37.575554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.995 [2024-12-10 22:37:37.695024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.930 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.930 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.930 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4142203 00:06:30.930 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4142203 00:06:30.930 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.190 lslocks: write error 00:06:31.190 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4142188 00:06:31.190 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4142188 ']' 00:06:31.190 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4142188 00:06:31.190 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.190 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.190 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4142188 00:06:31.190 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.190 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.190 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4142188' 00:06:31.190 killing process with pid 4142188 00:06:31.190 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4142188 00:06:31.190 22:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4142188 00:06:32.128 22:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4142203 00:06:32.128 22:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4142203 ']' 00:06:32.128 22:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4142203 00:06:32.128 22:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:32.128 22:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.128 22:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4142203 00:06:32.128 22:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.128 22:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.128 22:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4142203' 00:06:32.128 killing process with pid 4142203 00:06:32.128 22:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4142203 00:06:32.128 22:37:39 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4142203 00:06:32.387 00:06:32.387 real 0m3.137s 00:06:32.387 user 0m3.371s 00:06:32.387 sys 0m0.989s 00:06:32.387 22:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.387 22:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.387 ************************************ 00:06:32.387 END TEST locking_app_on_unlocked_coremask 00:06:32.387 ************************************ 00:06:32.646 22:37:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:32.646 22:37:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.646 22:37:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.646 22:37:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.646 ************************************ 00:06:32.646 START TEST locking_app_on_locked_coremask 00:06:32.646 ************************************ 00:06:32.646 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:32.646 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4142508 00:06:32.646 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.646 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4142508 /var/tmp/spdk.sock 00:06:32.646 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4142508 ']' 00:06:32.646 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:32.646 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.646 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.646 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.646 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.646 [2024-12-10 22:37:40.227175] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:32.646 [2024-12-10 22:37:40.227259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142508 ] 00:06:32.646 [2024-12-10 22:37:40.297274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.646 [2024-12-10 22:37:40.358916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.906 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.906 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:32.906 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4142633 00:06:32.906 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.906 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4142633 /var/tmp/spdk2.sock 
00:06:32.906 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:32.906 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4142633 /var/tmp/spdk2.sock 00:06:32.906 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:33.164 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.164 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:33.164 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.164 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4142633 /var/tmp/spdk2.sock 00:06:33.164 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4142633 ']' 00:06:33.164 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.164 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.164 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:33.164 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.164 22:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.164 [2024-12-10 22:37:40.688656] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:33.164 [2024-12-10 22:37:40.688739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142633 ] 00:06:33.164 [2024-12-10 22:37:40.791170] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4142508 has claimed it. 00:06:33.164 [2024-12-10 22:37:40.791222] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:33.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4142633) - No such process 00:06:33.733 ERROR: process (pid: 4142633) is no longer running 00:06:33.733 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.733 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:33.733 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:33.733 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.733 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.733 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.733 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4142508 00:06:33.733 22:37:41 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4142508 00:06:33.733 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.992 lslocks: write error 00:06:33.992 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4142508 00:06:33.992 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4142508 ']' 00:06:33.992 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4142508 00:06:33.992 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.992 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.992 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4142508 00:06:33.992 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.992 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.992 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4142508' 00:06:33.992 killing process with pid 4142508 00:06:33.992 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4142508 00:06:33.992 22:37:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4142508 00:06:34.562 00:06:34.562 real 0m1.946s 00:06:34.562 user 0m2.122s 00:06:34.562 sys 0m0.650s 00:06:34.562 22:37:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.562 22:37:42 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.562 ************************************ 00:06:34.562 END TEST locking_app_on_locked_coremask 00:06:34.562 ************************************ 00:06:34.562 22:37:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:34.562 22:37:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.562 22:37:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.562 22:37:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.562 ************************************ 00:06:34.562 START TEST locking_overlapped_coremask 00:06:34.562 ************************************ 00:06:34.562 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:34.562 22:37:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4142806 00:06:34.562 22:37:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:34.562 22:37:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4142806 /var/tmp/spdk.sock 00:06:34.562 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4142806 ']' 00:06:34.562 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.562 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.562 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.562 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.562 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.562 [2024-12-10 22:37:42.220319] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:34.562 [2024-12-10 22:37:42.220409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142806 ] 00:06:34.562 [2024-12-10 22:37:42.286845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.820 [2024-12-10 22:37:42.349977] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.820 [2024-12-10 22:37:42.350009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.820 [2024-12-10 22:37:42.350012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4142930 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4142930 /var/tmp/spdk2.sock 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 4142930 /var/tmp/spdk2.sock 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4142930 /var/tmp/spdk2.sock 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4142930 ']' 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.078 22:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.078 [2024-12-10 22:37:42.684830] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:06:35.078 [2024-12-10 22:37:42.684943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142930 ] 00:06:35.078 [2024-12-10 22:37:42.789996] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4142806 has claimed it. 00:06:35.078 [2024-12-10 22:37:42.790054] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4142930) - No such process 00:06:36.017 ERROR: process (pid: 4142930) is no longer running 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4142806 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 4142806 ']' 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 4142806 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4142806 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4142806' 00:06:36.017 killing process with pid 4142806 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 4142806 00:06:36.017 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 4142806 00:06:36.277 00:06:36.277 real 0m1.680s 00:06:36.277 user 0m4.715s 00:06:36.277 sys 0m0.460s 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.277 
************************************ 00:06:36.277 END TEST locking_overlapped_coremask 00:06:36.277 ************************************ 00:06:36.277 22:37:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:36.277 22:37:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.277 22:37:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.277 22:37:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.277 ************************************ 00:06:36.277 START TEST locking_overlapped_coremask_via_rpc 00:06:36.277 ************************************ 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4143095 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4143095 /var/tmp/spdk.sock 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4143095 ']' 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:36.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.277 22:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.277 [2024-12-10 22:37:43.955523] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:36.277 [2024-12-10 22:37:43.955634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143095 ] 00:06:36.537 [2024-12-10 22:37:44.022379] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:36.537 [2024-12-10 22:37:44.022416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.537 [2024-12-10 22:37:44.082503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.537 [2024-12-10 22:37:44.082572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.537 [2024-12-10 22:37:44.082577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.796 22:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.796 22:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:36.796 22:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4143111 00:06:36.796 22:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4143111 /var/tmp/spdk2.sock 00:06:36.796 22:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4143111 ']' 00:06:36.796 22:37:44 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.796 22:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.796 22:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:36.796 22:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.796 22:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.796 22:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.796 [2024-12-10 22:37:44.411044] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:36.796 [2024-12-10 22:37:44.411143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143111 ] 00:06:36.796 [2024-12-10 22:37:44.516723] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:36.796 [2024-12-10 22:37:44.516760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.078 [2024-12-10 22:37:44.638306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.078 [2024-12-10 22:37:44.641642] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:37.078 [2024-12-10 22:37:44.641645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.020 22:37:45 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.020 [2024-12-10 22:37:45.412651] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4143095 has claimed it. 00:06:38.020 request: 00:06:38.020 { 00:06:38.020 "method": "framework_enable_cpumask_locks", 00:06:38.020 "req_id": 1 00:06:38.020 } 00:06:38.020 Got JSON-RPC error response 00:06:38.020 response: 00:06:38.020 { 00:06:38.020 "code": -32603, 00:06:38.020 "message": "Failed to claim CPU core: 2" 00:06:38.020 } 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4143095 /var/tmp/spdk.sock 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 4143095 ']' 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4143111 /var/tmp/spdk2.sock 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4143111 ']' 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.020 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.279 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.279 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.279 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:38.279 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.279 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.279 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.279 00:06:38.279 real 0m2.063s 00:06:38.279 user 0m1.146s 00:06:38.279 sys 0m0.175s 00:06:38.279 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.279 22:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.279 ************************************ 00:06:38.279 END TEST locking_overlapped_coremask_via_rpc 00:06:38.279 ************************************ 00:06:38.279 22:37:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:38.279 22:37:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4143095 ]] 00:06:38.279 22:37:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 4143095 00:06:38.279 22:37:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4143095 ']' 00:06:38.279 22:37:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4143095 00:06:38.279 22:37:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:38.279 22:37:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.279 22:37:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4143095 00:06:38.537 22:37:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.537 22:37:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.537 22:37:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4143095' 00:06:38.537 killing process with pid 4143095 00:06:38.537 22:37:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4143095 00:06:38.537 22:37:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4143095 00:06:38.796 22:37:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4143111 ]] 00:06:38.796 22:37:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4143111 00:06:38.796 22:37:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4143111 ']' 00:06:38.796 22:37:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4143111 00:06:38.796 22:37:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:38.796 22:37:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.796 22:37:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4143111 00:06:38.796 22:37:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:38.796 22:37:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:38.796 22:37:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
4143111' 00:06:38.796 killing process with pid 4143111 00:06:38.796 22:37:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4143111 00:06:38.797 22:37:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4143111 00:06:39.365 22:37:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.365 22:37:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:39.365 22:37:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4143095 ]] 00:06:39.365 22:37:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4143095 00:06:39.365 22:37:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4143095 ']' 00:06:39.365 22:37:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4143095 00:06:39.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4143095) - No such process 00:06:39.365 22:37:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4143095 is not found' 00:06:39.365 Process with pid 4143095 is not found 00:06:39.365 22:37:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4143111 ]] 00:06:39.365 22:37:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4143111 00:06:39.365 22:37:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4143111 ']' 00:06:39.365 22:37:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4143111 00:06:39.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4143111) - No such process 00:06:39.366 22:37:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4143111 is not found' 00:06:39.366 Process with pid 4143111 is not found 00:06:39.366 22:37:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.366 00:06:39.366 real 0m15.799s 00:06:39.366 user 0m28.779s 00:06:39.366 sys 0m5.204s 00:06:39.366 22:37:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.366 
22:37:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.366 ************************************ 00:06:39.366 END TEST cpu_locks 00:06:39.366 ************************************ 00:06:39.366 00:06:39.366 real 0m40.488s 00:06:39.366 user 1m19.459s 00:06:39.366 sys 0m9.230s 00:06:39.366 22:37:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.366 22:37:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.366 ************************************ 00:06:39.366 END TEST event 00:06:39.366 ************************************ 00:06:39.366 22:37:46 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:39.366 22:37:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.366 22:37:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.366 22:37:46 -- common/autotest_common.sh@10 -- # set +x 00:06:39.366 ************************************ 00:06:39.366 START TEST thread 00:06:39.366 ************************************ 00:06:39.366 22:37:46 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:39.366 * Looking for test storage... 
00:06:39.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:39.366 22:37:47 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:39.366 22:37:47 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:39.366 22:37:47 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:39.625 22:37:47 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:39.625 22:37:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.625 22:37:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.625 22:37:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.625 22:37:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.625 22:37:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.625 22:37:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.625 22:37:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.625 22:37:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.625 22:37:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.625 22:37:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.625 22:37:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.625 22:37:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:39.625 22:37:47 thread -- scripts/common.sh@345 -- # : 1 00:06:39.625 22:37:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.625 22:37:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.625 22:37:47 thread -- scripts/common.sh@365 -- # decimal 1 00:06:39.625 22:37:47 thread -- scripts/common.sh@353 -- # local d=1 00:06:39.625 22:37:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.625 22:37:47 thread -- scripts/common.sh@355 -- # echo 1 00:06:39.625 22:37:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.625 22:37:47 thread -- scripts/common.sh@366 -- # decimal 2 00:06:39.625 22:37:47 thread -- scripts/common.sh@353 -- # local d=2 00:06:39.625 22:37:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.625 22:37:47 thread -- scripts/common.sh@355 -- # echo 2 00:06:39.625 22:37:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.625 22:37:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.625 22:37:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.625 22:37:47 thread -- scripts/common.sh@368 -- # return 0 00:06:39.625 22:37:47 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.625 22:37:47 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:39.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.625 --rc genhtml_branch_coverage=1 00:06:39.625 --rc genhtml_function_coverage=1 00:06:39.625 --rc genhtml_legend=1 00:06:39.625 --rc geninfo_all_blocks=1 00:06:39.625 --rc geninfo_unexecuted_blocks=1 00:06:39.625 00:06:39.625 ' 00:06:39.625 22:37:47 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:39.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.625 --rc genhtml_branch_coverage=1 00:06:39.625 --rc genhtml_function_coverage=1 00:06:39.625 --rc genhtml_legend=1 00:06:39.625 --rc geninfo_all_blocks=1 00:06:39.625 --rc geninfo_unexecuted_blocks=1 00:06:39.625 00:06:39.625 ' 00:06:39.625 22:37:47 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:39.625 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.625 --rc genhtml_branch_coverage=1 00:06:39.625 --rc genhtml_function_coverage=1 00:06:39.625 --rc genhtml_legend=1 00:06:39.625 --rc geninfo_all_blocks=1 00:06:39.625 --rc geninfo_unexecuted_blocks=1 00:06:39.625 00:06:39.625 ' 00:06:39.625 22:37:47 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:39.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.625 --rc genhtml_branch_coverage=1 00:06:39.625 --rc genhtml_function_coverage=1 00:06:39.625 --rc genhtml_legend=1 00:06:39.625 --rc geninfo_all_blocks=1 00:06:39.625 --rc geninfo_unexecuted_blocks=1 00:06:39.625 00:06:39.625 ' 00:06:39.625 22:37:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.625 22:37:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:39.625 22:37:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.625 22:37:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.625 ************************************ 00:06:39.625 START TEST thread_poller_perf 00:06:39.625 ************************************ 00:06:39.625 22:37:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.625 [2024-12-10 22:37:47.168762] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:06:39.625 [2024-12-10 22:37:47.168826] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143601 ] 00:06:39.625 [2024-12-10 22:37:47.237297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.625 [2024-12-10 22:37:47.294934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.625 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:41.008 [2024-12-10T21:37:48.740Z] ====================================== 00:06:41.008 [2024-12-10T21:37:48.740Z] busy:2709501711 (cyc) 00:06:41.008 [2024-12-10T21:37:48.740Z] total_run_count: 369000 00:06:41.008 [2024-12-10T21:37:48.740Z] tsc_hz: 2700000000 (cyc) 00:06:41.008 [2024-12-10T21:37:48.740Z] ====================================== 00:06:41.008 [2024-12-10T21:37:48.740Z] poller_cost: 7342 (cyc), 2719 (nsec) 00:06:41.008 00:06:41.008 real 0m1.209s 00:06:41.008 user 0m1.133s 00:06:41.008 sys 0m0.071s 00:06:41.008 22:37:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.008 22:37:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.008 ************************************ 00:06:41.008 END TEST thread_poller_perf 00:06:41.008 ************************************ 00:06:41.008 22:37:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.008 22:37:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:41.008 22:37:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.008 22:37:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.008 ************************************ 00:06:41.008 START TEST thread_poller_perf 00:06:41.008 
************************************ 00:06:41.008 22:37:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.008 [2024-12-10 22:37:48.424941] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:41.008 [2024-12-10 22:37:48.425008] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143761 ] 00:06:41.008 [2024-12-10 22:37:48.491840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.008 [2024-12-10 22:37:48.545592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.008 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:41.950 [2024-12-10T21:37:49.682Z] ====================================== 00:06:41.950 [2024-12-10T21:37:49.682Z] busy:2702000616 (cyc) 00:06:41.950 [2024-12-10T21:37:49.682Z] total_run_count: 4491000 00:06:41.950 [2024-12-10T21:37:49.682Z] tsc_hz: 2700000000 (cyc) 00:06:41.950 [2024-12-10T21:37:49.682Z] ====================================== 00:06:41.950 [2024-12-10T21:37:49.682Z] poller_cost: 601 (cyc), 222 (nsec) 00:06:41.950 00:06:41.950 real 0m1.198s 00:06:41.950 user 0m1.130s 00:06:41.950 sys 0m0.063s 00:06:41.950 22:37:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.950 22:37:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.950 ************************************ 00:06:41.950 END TEST thread_poller_perf 00:06:41.950 ************************************ 00:06:41.950 22:37:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:41.950 00:06:41.950 real 0m2.637s 00:06:41.950 user 0m2.401s 00:06:41.950 sys 0m0.240s 00:06:41.950 22:37:49 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.950 22:37:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.950 ************************************ 00:06:41.950 END TEST thread 00:06:41.950 ************************************ 00:06:41.950 22:37:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:41.950 22:37:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:41.950 22:37:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.950 22:37:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.950 22:37:49 -- common/autotest_common.sh@10 -- # set +x 00:06:42.208 ************************************ 00:06:42.208 START TEST app_cmdline 00:06:42.208 ************************************ 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.208 * Looking for test storage... 00:06:42.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.208 22:37:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:42.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.208 --rc genhtml_branch_coverage=1 
00:06:42.208 --rc genhtml_function_coverage=1 00:06:42.208 --rc genhtml_legend=1 00:06:42.208 --rc geninfo_all_blocks=1 00:06:42.208 --rc geninfo_unexecuted_blocks=1 00:06:42.208 00:06:42.208 ' 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:42.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.208 --rc genhtml_branch_coverage=1 00:06:42.208 --rc genhtml_function_coverage=1 00:06:42.208 --rc genhtml_legend=1 00:06:42.208 --rc geninfo_all_blocks=1 00:06:42.208 --rc geninfo_unexecuted_blocks=1 00:06:42.208 00:06:42.208 ' 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:42.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.208 --rc genhtml_branch_coverage=1 00:06:42.208 --rc genhtml_function_coverage=1 00:06:42.208 --rc genhtml_legend=1 00:06:42.208 --rc geninfo_all_blocks=1 00:06:42.208 --rc geninfo_unexecuted_blocks=1 00:06:42.208 00:06:42.208 ' 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:42.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.208 --rc genhtml_branch_coverage=1 00:06:42.208 --rc genhtml_function_coverage=1 00:06:42.208 --rc genhtml_legend=1 00:06:42.208 --rc geninfo_all_blocks=1 00:06:42.208 --rc geninfo_unexecuted_blocks=1 00:06:42.208 00:06:42.208 ' 00:06:42.208 22:37:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:42.208 22:37:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4143968 00:06:42.208 22:37:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:42.208 22:37:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4143968 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 4143968 ']' 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.208 22:37:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.208 [2024-12-10 22:37:49.890252] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:42.208 [2024-12-10 22:37:49.890339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143968 ] 00:06:42.468 [2024-12-10 22:37:49.956268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.468 [2024-12-10 22:37:50.014159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.726 22:37:50 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.726 22:37:50 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:42.726 22:37:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:42.985 { 00:06:42.985 "version": "SPDK v25.01-pre git sha1 2104eacf0", 00:06:42.985 "fields": { 00:06:42.985 "major": 25, 00:06:42.985 "minor": 1, 00:06:42.985 "patch": 0, 00:06:42.985 "suffix": "-pre", 00:06:42.985 "commit": "2104eacf0" 00:06:42.985 } 00:06:42.985 } 00:06:42.985 22:37:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:42.985 22:37:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:42.985 22:37:50 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:42.985 22:37:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:42.985 22:37:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:42.985 22:37:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.985 22:37:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.985 22:37:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:42.985 22:37:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:42.985 22:37:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:42.985 22:37:50 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.245 request: 00:06:43.245 { 00:06:43.245 "method": "env_dpdk_get_mem_stats", 00:06:43.245 "req_id": 1 00:06:43.245 } 00:06:43.245 Got JSON-RPC error response 00:06:43.245 response: 00:06:43.245 { 00:06:43.245 "code": -32601, 00:06:43.245 "message": "Method not found" 00:06:43.245 } 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.245 22:37:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4143968 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 4143968 ']' 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 4143968 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4143968 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4143968' 00:06:43.245 killing process with pid 4143968 00:06:43.245 
22:37:50 app_cmdline -- common/autotest_common.sh@973 -- # kill 4143968 00:06:43.245 22:37:50 app_cmdline -- common/autotest_common.sh@978 -- # wait 4143968 00:06:43.814 00:06:43.814 real 0m1.669s 00:06:43.814 user 0m2.057s 00:06:43.814 sys 0m0.500s 00:06:43.814 22:37:51 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.814 22:37:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.814 ************************************ 00:06:43.814 END TEST app_cmdline 00:06:43.814 ************************************ 00:06:43.814 22:37:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:43.814 22:37:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.814 22:37:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.814 22:37:51 -- common/autotest_common.sh@10 -- # set +x 00:06:43.814 ************************************ 00:06:43.814 START TEST version 00:06:43.814 ************************************ 00:06:43.814 22:37:51 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:43.814 * Looking for test storage... 
00:06:43.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:43.814 22:37:51 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.814 22:37:51 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.814 22:37:51 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.814 22:37:51 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.814 22:37:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.814 22:37:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.814 22:37:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.814 22:37:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.814 22:37:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.814 22:37:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.814 22:37:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.814 22:37:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.814 22:37:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.814 22:37:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.814 22:37:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.814 22:37:51 version -- scripts/common.sh@344 -- # case "$op" in 00:06:43.814 22:37:51 version -- scripts/common.sh@345 -- # : 1 00:06:43.814 22:37:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.814 22:37:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.072 22:37:51 version -- scripts/common.sh@365 -- # decimal 1 00:06:44.072 22:37:51 version -- scripts/common.sh@353 -- # local d=1 00:06:44.072 22:37:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.072 22:37:51 version -- scripts/common.sh@355 -- # echo 1 00:06:44.072 22:37:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.072 22:37:51 version -- scripts/common.sh@366 -- # decimal 2 00:06:44.073 22:37:51 version -- scripts/common.sh@353 -- # local d=2 00:06:44.073 22:37:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.073 22:37:51 version -- scripts/common.sh@355 -- # echo 2 00:06:44.073 22:37:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.073 22:37:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.073 22:37:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.073 22:37:51 version -- scripts/common.sh@368 -- # return 0 00:06:44.073 22:37:51 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.073 22:37:51 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.073 --rc genhtml_branch_coverage=1 00:06:44.073 --rc genhtml_function_coverage=1 00:06:44.073 --rc genhtml_legend=1 00:06:44.073 --rc geninfo_all_blocks=1 00:06:44.073 --rc geninfo_unexecuted_blocks=1 00:06:44.073 00:06:44.073 ' 00:06:44.073 22:37:51 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.073 --rc genhtml_branch_coverage=1 00:06:44.073 --rc genhtml_function_coverage=1 00:06:44.073 --rc genhtml_legend=1 00:06:44.073 --rc geninfo_all_blocks=1 00:06:44.073 --rc geninfo_unexecuted_blocks=1 00:06:44.073 00:06:44.073 ' 00:06:44.073 22:37:51 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.073 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.073 --rc genhtml_branch_coverage=1 00:06:44.073 --rc genhtml_function_coverage=1 00:06:44.073 --rc genhtml_legend=1 00:06:44.073 --rc geninfo_all_blocks=1 00:06:44.073 --rc geninfo_unexecuted_blocks=1 00:06:44.073 00:06:44.073 ' 00:06:44.073 22:37:51 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.073 --rc genhtml_branch_coverage=1 00:06:44.073 --rc genhtml_function_coverage=1 00:06:44.073 --rc genhtml_legend=1 00:06:44.073 --rc geninfo_all_blocks=1 00:06:44.073 --rc geninfo_unexecuted_blocks=1 00:06:44.073 00:06:44.073 ' 00:06:44.073 22:37:51 version -- app/version.sh@17 -- # get_header_version major 00:06:44.073 22:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.073 22:37:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.073 22:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.073 22:37:51 version -- app/version.sh@17 -- # major=25 00:06:44.073 22:37:51 version -- app/version.sh@18 -- # get_header_version minor 00:06:44.073 22:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.073 22:37:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.073 22:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.073 22:37:51 version -- app/version.sh@18 -- # minor=1 00:06:44.073 22:37:51 version -- app/version.sh@19 -- # get_header_version patch 00:06:44.073 22:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.073 22:37:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.073 22:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.073 
22:37:51 version -- app/version.sh@19 -- # patch=0 00:06:44.073 22:37:51 version -- app/version.sh@20 -- # get_header_version suffix 00:06:44.073 22:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.073 22:37:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.073 22:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.073 22:37:51 version -- app/version.sh@20 -- # suffix=-pre 00:06:44.073 22:37:51 version -- app/version.sh@22 -- # version=25.1 00:06:44.073 22:37:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:44.073 22:37:51 version -- app/version.sh@28 -- # version=25.1rc0 00:06:44.073 22:37:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:44.073 22:37:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:44.073 22:37:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:44.073 22:37:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:44.073 00:06:44.073 real 0m0.200s 00:06:44.073 user 0m0.136s 00:06:44.073 sys 0m0.089s 00:06:44.073 22:37:51 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.073 22:37:51 version -- common/autotest_common.sh@10 -- # set +x 00:06:44.073 ************************************ 00:06:44.073 END TEST version 00:06:44.073 ************************************ 00:06:44.073 22:37:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:44.073 22:37:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:44.073 22:37:51 -- spdk/autotest.sh@194 -- # uname -s 00:06:44.073 22:37:51 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:44.073 22:37:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:44.073 22:37:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:44.073 22:37:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:44.073 22:37:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:44.073 22:37:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:44.073 22:37:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.073 22:37:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.073 22:37:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:44.073 22:37:51 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:44.073 22:37:51 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:44.073 22:37:51 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:44.073 22:37:51 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:44.073 22:37:51 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:44.073 22:37:51 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.073 22:37:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.073 22:37:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.073 22:37:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.073 ************************************ 00:06:44.073 START TEST nvmf_tcp 00:06:44.073 ************************************ 00:06:44.073 22:37:51 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.073 * Looking for test storage... 
00:06:44.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:44.073 22:37:51 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.073 22:37:51 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.073 22:37:51 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.332 22:37:51 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.332 22:37:51 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:44.332 22:37:51 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.332 22:37:51 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.332 --rc genhtml_branch_coverage=1 00:06:44.332 --rc genhtml_function_coverage=1 00:06:44.332 --rc genhtml_legend=1 00:06:44.332 --rc geninfo_all_blocks=1 00:06:44.332 --rc geninfo_unexecuted_blocks=1 00:06:44.332 00:06:44.332 ' 00:06:44.332 22:37:51 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.332 --rc genhtml_branch_coverage=1 00:06:44.332 --rc genhtml_function_coverage=1 00:06:44.332 --rc genhtml_legend=1 00:06:44.332 --rc geninfo_all_blocks=1 00:06:44.332 --rc geninfo_unexecuted_blocks=1 00:06:44.332 00:06:44.332 ' 00:06:44.332 22:37:51 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:44.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.332 --rc genhtml_branch_coverage=1 00:06:44.332 --rc genhtml_function_coverage=1 00:06:44.332 --rc genhtml_legend=1 00:06:44.332 --rc geninfo_all_blocks=1 00:06:44.332 --rc geninfo_unexecuted_blocks=1 00:06:44.332 00:06:44.332 ' 00:06:44.332 22:37:51 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.332 --rc genhtml_branch_coverage=1 00:06:44.332 --rc genhtml_function_coverage=1 00:06:44.332 --rc genhtml_legend=1 00:06:44.333 --rc geninfo_all_blocks=1 00:06:44.333 --rc geninfo_unexecuted_blocks=1 00:06:44.333 00:06:44.333 ' 00:06:44.333 22:37:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:44.333 22:37:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:44.333 22:37:51 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:44.333 22:37:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.333 22:37:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.333 22:37:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.333 ************************************ 00:06:44.333 START TEST nvmf_target_core 00:06:44.333 ************************************ 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:44.333 * Looking for test storage... 
00:06:44.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.333 --rc genhtml_branch_coverage=1 00:06:44.333 --rc genhtml_function_coverage=1 00:06:44.333 --rc genhtml_legend=1 00:06:44.333 --rc geninfo_all_blocks=1 00:06:44.333 --rc geninfo_unexecuted_blocks=1 00:06:44.333 00:06:44.333 ' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.333 --rc genhtml_branch_coverage=1 
00:06:44.333 --rc genhtml_function_coverage=1 00:06:44.333 --rc genhtml_legend=1 00:06:44.333 --rc geninfo_all_blocks=1 00:06:44.333 --rc geninfo_unexecuted_blocks=1 00:06:44.333 00:06:44.333 ' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.333 --rc genhtml_branch_coverage=1 00:06:44.333 --rc genhtml_function_coverage=1 00:06:44.333 --rc genhtml_legend=1 00:06:44.333 --rc geninfo_all_blocks=1 00:06:44.333 --rc geninfo_unexecuted_blocks=1 00:06:44.333 00:06:44.333 ' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.333 --rc genhtml_branch_coverage=1 00:06:44.333 --rc genhtml_function_coverage=1 00:06:44.333 --rc genhtml_legend=1 00:06:44.333 --rc geninfo_all_blocks=1 00:06:44.333 --rc geninfo_unexecuted_blocks=1 00:06:44.333 00:06:44.333 ' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.333 22:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.333 ************************************ 00:06:44.333 START TEST nvmf_abort 00:06:44.333 ************************************ 00:06:44.333 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:44.594 * Looking for test storage... 
00:06:44.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.594 
22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.594 --rc genhtml_branch_coverage=1 00:06:44.594 --rc genhtml_function_coverage=1 00:06:44.594 --rc genhtml_legend=1 00:06:44.594 --rc geninfo_all_blocks=1 00:06:44.594 --rc 
geninfo_unexecuted_blocks=1 00:06:44.594 00:06:44.594 ' 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.594 --rc genhtml_branch_coverage=1 00:06:44.594 --rc genhtml_function_coverage=1 00:06:44.594 --rc genhtml_legend=1 00:06:44.594 --rc geninfo_all_blocks=1 00:06:44.594 --rc geninfo_unexecuted_blocks=1 00:06:44.594 00:06:44.594 ' 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.594 --rc genhtml_branch_coverage=1 00:06:44.594 --rc genhtml_function_coverage=1 00:06:44.594 --rc genhtml_legend=1 00:06:44.594 --rc geninfo_all_blocks=1 00:06:44.594 --rc geninfo_unexecuted_blocks=1 00:06:44.594 00:06:44.594 ' 00:06:44.594 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.594 --rc genhtml_branch_coverage=1 00:06:44.594 --rc genhtml_function_coverage=1 00:06:44.594 --rc genhtml_legend=1 00:06:44.594 --rc geninfo_all_blocks=1 00:06:44.594 --rc geninfo_unexecuted_blocks=1 00:06:44.594 00:06:44.594 ' 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.595 22:37:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:44.595 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:47.132 22:37:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:47.132 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:47.132 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:47.132 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:47.133 22:37:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:47.133 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:06:47.133 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:47.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:47.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:06:47.133 00:06:47.133 --- 10.0.0.2 ping statistics --- 00:06:47.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.133 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:47.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:47.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:06:47.133 00:06:47.133 --- 10.0.0.1 ping statistics --- 00:06:47.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.133 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=4146055 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4146055 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 4146055 ']' 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.133 [2024-12-10 22:37:54.480935] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:06:47.133 [2024-12-10 22:37:54.481003] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.133 [2024-12-10 22:37:54.550195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.133 [2024-12-10 22:37:54.604974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:47.133 [2024-12-10 22:37:54.605031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:47.133 [2024-12-10 22:37:54.605059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:47.133 [2024-12-10 22:37:54.605071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:47.133 [2024-12-10 22:37:54.605080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:47.133 [2024-12-10 22:37:54.606634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.133 [2024-12-10 22:37:54.606696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.133 [2024-12-10 22:37:54.606691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.133 [2024-12-10 22:37:54.752899] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.133 Malloc0 00:06:47.133 22:37:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.133 Delay0 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.133 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.134 [2024-12-10 22:37:54.826043] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.134 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:47.393 [2024-12-10 22:37:54.941334] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:49.303 Initializing NVMe Controllers 00:06:49.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:49.303 controller IO queue size 128 less than required 00:06:49.303 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:49.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:49.303 Initialization complete. Launching workers. 
00:06:49.303 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27986 00:06:49.303 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28047, failed to submit 62 00:06:49.303 success 27990, unsuccessful 57, failed 0 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:49.303 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:49.561 rmmod nvme_tcp 00:06:49.562 rmmod nvme_fabrics 00:06:49.562 rmmod nvme_keyring 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:49.562 22:37:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4146055 ']' 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4146055 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 4146055 ']' 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 4146055 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4146055 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4146055' 00:06:49.562 killing process with pid 4146055 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 4146055 00:06:49.562 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 4146055 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.822 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.732 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:51.732 00:06:51.732 real 0m7.394s 00:06:51.732 user 0m10.724s 00:06:51.732 sys 0m2.528s 00:06:51.732 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.732 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.732 ************************************ 00:06:51.732 END TEST nvmf_abort 00:06:51.732 ************************************ 00:06:51.732 22:37:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:51.732 22:37:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:51.732 22:37:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.732 22:37:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:51.991 ************************************ 00:06:51.991 START TEST nvmf_ns_hotplug_stress 00:06:51.991 ************************************ 00:06:51.991 22:37:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:51.991 * Looking for test storage... 00:06:51.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.991 
22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.991 22:37:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:51.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.991 --rc genhtml_branch_coverage=1 00:06:51.991 --rc genhtml_function_coverage=1 00:06:51.991 --rc genhtml_legend=1 00:06:51.991 --rc geninfo_all_blocks=1 00:06:51.991 --rc geninfo_unexecuted_blocks=1 00:06:51.991 00:06:51.991 ' 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:51.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.991 --rc genhtml_branch_coverage=1 00:06:51.991 --rc genhtml_function_coverage=1 00:06:51.991 --rc genhtml_legend=1 00:06:51.991 --rc geninfo_all_blocks=1 00:06:51.991 --rc geninfo_unexecuted_blocks=1 00:06:51.991 00:06:51.991 ' 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:51.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.991 --rc genhtml_branch_coverage=1 00:06:51.991 --rc genhtml_function_coverage=1 00:06:51.991 --rc genhtml_legend=1 00:06:51.991 --rc geninfo_all_blocks=1 00:06:51.991 --rc geninfo_unexecuted_blocks=1 00:06:51.991 00:06:51.991 ' 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:51.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.991 --rc genhtml_branch_coverage=1 00:06:51.991 --rc genhtml_function_coverage=1 00:06:51.991 --rc genhtml_legend=1 00:06:51.991 --rc geninfo_all_blocks=1 00:06:51.991 --rc geninfo_unexecuted_blocks=1 00:06:51.991 
00:06:51.991 ' 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.991 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:51.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:51.992 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:54.536 22:38:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:54.536 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:54.536 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:54.536 22:38:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:54.536 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:54.536 22:38:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:54.536 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:54.536 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.537 22:38:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:54.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:06:54.537 00:06:54.537 --- 10.0.0.2 ping statistics --- 00:06:54.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.537 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:54.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:06:54.537 00:06:54.537 --- 10.0.0.1 ping statistics --- 00:06:54.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.537 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4148410 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4148410 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 4148410 ']' 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.537 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:54.537 [2024-12-10 22:38:02.004913] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:54.537 [2024-12-10 22:38:02.004996] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.537 [2024-12-10 22:38:02.078424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.537 [2024-12-10 22:38:02.138522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.537 [2024-12-10 22:38:02.138608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.537 [2024-12-10 22:38:02.138637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.537 [2024-12-10 22:38:02.138648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.537 [2024-12-10 22:38:02.138658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:54.537 [2024-12-10 22:38:02.140388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.537 [2024-12-10 22:38:02.140452] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.537 [2024-12-10 22:38:02.140455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.832 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.832 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:54.832 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:54.832 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.832 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:54.832 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.832 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:54.832 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:54.832 [2024-12-10 22:38:02.539800] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.091 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:55.349 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.606 [2024-12-10 22:38:03.090539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.606 22:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:55.863 22:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:56.120 Malloc0 00:06:56.120 22:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:56.379 Delay0 00:06:56.379 22:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.635 22:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:56.892 NULL1 00:06:56.892 22:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:57.151 22:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4148720 00:06:57.151 22:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:57.152 22:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:06:57.152 22:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.524 Read completed with error (sct=0, sc=11) 00:06:58.524 22:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.524 22:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:58.524 22:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:58.781 true 00:06:58.781 22:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:06:58.782 22:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.715 22:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.715 22:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:59.715 22:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:59.973 true 00:07:00.230 22:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:00.230 22:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.488 22:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.746 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:00.746 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:01.004 true 00:07:01.004 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:01.004 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.261 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.519 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:01.519 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:01.777 true 00:07:01.777 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:01.778 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.714 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.971 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:02.971 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:03.228 true 00:07:03.228 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:03.228 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.486 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.742 
22:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:03.743 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:03.999 true 00:07:03.999 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:03.999 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.257 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.822 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:04.822 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:04.822 true 00:07:04.822 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:04.822 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.200 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.200 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:06.200 22:38:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:06.458 true 00:07:06.458 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:06.458 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.715 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.972 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:06.972 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:07.230 true 00:07:07.230 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:07.230 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.487 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.744 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:07.744 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:08.002 true 00:07:08.002 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:08.002 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.938 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.196 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:09.196 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:09.456 true 00:07:09.456 22:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:09.457 22:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.714 22:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.281 22:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:10.281 22:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:10.281 true 00:07:10.281 22:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:10.282 22:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.540 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.798 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:10.798 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:11.056 true 00:07:11.315 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:11.315 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.251 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.508 22:38:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:12.508 22:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:12.766 true 00:07:12.766 22:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:12.766 22:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.024 22:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.283 22:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:13.283 22:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:13.541 true 00:07:13.541 22:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:13.541 22:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.478 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.736 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:14.736 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:14.995 true 00:07:14.995 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:14.995 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.252 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.510 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:15.510 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:15.767 true 00:07:15.767 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:15.768 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.706 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.706 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:07:16.964 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:16.964 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:17.221 true 00:07:17.221 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:17.221 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.479 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.737 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:17.737 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:17.995 true 00:07:17.995 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:17.995 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.932 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:07:18.932 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:18.932 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:19.189 true 00:07:19.189 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:19.189 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.447 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.705 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:19.705 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:19.964 true 00:07:19.964 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:19.964 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.227 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.793 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 
00:07:20.793 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:20.793 true 00:07:20.793 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:20.793 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.729 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.986 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:21.986 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:22.244 true 00:07:22.244 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:22.244 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.812 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.812 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:22.812 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:23.073 true 00:07:23.073 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:23.073 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.009 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.297 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:24.297 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:24.554 true 00:07:24.554 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:24.554 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.811 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.069 22:38:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:25.069 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:25.327 true 00:07:25.327 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:25.327 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.264 22:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.523 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:26.523 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:26.781 true 00:07:26.781 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:26.781 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.039 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.297 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:27.297 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:27.554 true 00:07:27.554 22:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:27.554 22:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.488 Initializing NVMe Controllers 00:07:28.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:28.488 Controller IO queue size 128, less than required. 00:07:28.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.488 Controller IO queue size 128, less than required. 00:07:28.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:28.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:28.488 Initialization complete. Launching workers. 
00:07:28.488 ======================================================== 00:07:28.488 Latency(us) 00:07:28.488 Device Information : IOPS MiB/s Average min max 00:07:28.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 845.20 0.41 73782.10 2800.47 1077471.38 00:07:28.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9275.37 4.53 13799.72 2917.99 459542.20 00:07:28.488 ======================================================== 00:07:28.488 Total : 10120.57 4.94 18809.04 2800.47 1077471.38 00:07:28.488 00:07:28.488 22:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.745 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:28.745 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:29.002 true 00:07:29.002 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4148720 00:07:29.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4148720) - No such process 00:07:29.002 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4148720 00:07:29.002 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.260 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.518 
22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:29.518 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:29.518 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:29.518 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.518 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:29.776 null0 00:07:29.776 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:29.776 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.776 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:30.033 null1 00:07:30.033 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.033 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.033 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:30.291 null2 00:07:30.291 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.291 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.291 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:30.549 null3 00:07:30.549 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.549 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.549 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:30.807 null4 00:07:30.807 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.807 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.807 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:31.065 null5 00:07:31.065 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.065 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.065 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:31.323 null6 00:07:31.323 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.323 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.323 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:31.584 null7 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:31.584 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4152937 4152938 4152940 4152942 4152944 4152946 4152948 4152950 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.585 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.860 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.860 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.860 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.860 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.860 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.860 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:31.860 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.860 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.426 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:07:32.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:32.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.943 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.201 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.201 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.201 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.201 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.201 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.201 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.201 22:38:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.201 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.460 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.719 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.719 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.719 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.719 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.719 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.719 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.719 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.719 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.978 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.236 22:38:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.236 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.494 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.494 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:34.494 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.494 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.494 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.494 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.752 22:38:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.752 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.011 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.011 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.011 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.011 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.011 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.011 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.012 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.012 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.271 22:38:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.271 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.530 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.530 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.530 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.530 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.530 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.530 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.530 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.530 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.789 
22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.789 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.047 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.306 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.306 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.306 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.306 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.306 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.306 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.306 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.564 22:38:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.564 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.822 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.822 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.822 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.822 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.822 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.822 22:38:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.823 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.823 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.081 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.340 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.340 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.340 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.340 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.340 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.340 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.340 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.340 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:37.597 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:37.597 rmmod nvme_tcp 00:07:37.856 rmmod nvme_fabrics 00:07:37.856 rmmod nvme_keyring 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4148410 ']' 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4148410 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 4148410 ']' 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 4148410 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4148410 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4148410' 00:07:37.856 killing process with pid 4148410 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 4148410 00:07:37.856 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 4148410 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.115 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.025 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.025 00:07:40.025 real 0m48.234s 00:07:40.025 user 3m44.427s 00:07:40.025 sys 0m16.000s 00:07:40.025 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.025 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:40.025 ************************************ 00:07:40.025 END TEST nvmf_ns_hotplug_stress 00:07:40.025 ************************************ 00:07:40.025 22:38:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:40.025 22:38:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.025 22:38:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.025 22:38:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.284 ************************************ 00:07:40.284 START TEST nvmf_delete_subsystem 00:07:40.284 ************************************ 00:07:40.284 
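The hotplug stress loop traced above (ns_hotplug_stress.sh@16–@18) repeatedly attaches eight null bdevs as namespaces 1–8 of cnode1 and then detaches them, for ten iterations. A minimal sketch of that pattern, with `rpc` stubbed out so it runs without an SPDK target (the real script invokes `scripts/rpc.py`; the stub and the sequential structure here are assumptions, since the log shows completions arriving out of order):

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress add/remove loop seen in the trace above.
# "rpc" is a stand-in for /path/to/spdk/scripts/rpc.py -- hypothetical stub.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do            # ten iterations, as in ns_hotplug_stress.sh@16
    # attach null0..null7 as nsid 1..8 (the @17 lines in the log)
    for n in {1..8}; do
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" &
    done
    wait
    # detach all eight again (the @18 lines); order is whatever completes first
    for n in {1..8}; do
        rpc nvmf_subsystem_remove_ns "$NQN" "$n" &
    done
    wait
    (( ++i ))
done
```

Backgrounding the RPCs mirrors why the log shows add/remove lines interleaved in non-sequential nsid order.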
22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:40.284 * Looking for test storage... 00:07:40.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.284 22:38:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.284 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.284 22:38:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.285 --rc genhtml_branch_coverage=1 00:07:40.285 --rc genhtml_function_coverage=1 00:07:40.285 --rc genhtml_legend=1 00:07:40.285 --rc geninfo_all_blocks=1 00:07:40.285 --rc geninfo_unexecuted_blocks=1 00:07:40.285 00:07:40.285 ' 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.285 --rc genhtml_branch_coverage=1 00:07:40.285 --rc genhtml_function_coverage=1 00:07:40.285 --rc genhtml_legend=1 00:07:40.285 --rc geninfo_all_blocks=1 00:07:40.285 --rc geninfo_unexecuted_blocks=1 00:07:40.285 00:07:40.285 ' 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.285 --rc genhtml_branch_coverage=1 00:07:40.285 --rc genhtml_function_coverage=1 00:07:40.285 --rc genhtml_legend=1 00:07:40.285 --rc geninfo_all_blocks=1 00:07:40.285 --rc geninfo_unexecuted_blocks=1 00:07:40.285 00:07:40.285 ' 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.285 --rc genhtml_branch_coverage=1 00:07:40.285 --rc genhtml_function_coverage=1 00:07:40.285 --rc genhtml_legend=1 00:07:40.285 --rc geninfo_all_blocks=1 00:07:40.285 --rc geninfo_unexecuted_blocks=1 00:07:40.285 00:07:40.285 ' 
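The trace above walks `scripts/common.sh`'s `lt 1.15 2` check: both versions are split on `.-`, each component is validated as a decimal, and the arrays are compared element-wise. A rough reconstruction of that comparison under those assumptions (the helper name `lt` matches the trace; the body here is a simplified sketch, not the SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version compare traced at scripts/common.sh@333-368.
# Returns 0 when $1 is strictly less than $2.
lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"     # split "1.15" -> (1 15), as in sh@336
    IFS=.- read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local v a b
    for (( v = 0; v < n; v++ )); do  # the sh@364 loop
        a=${v1[v]:-0} b=${v2[v]:-0}  # missing components compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                         # equal versions are not less-than
}
```

So `lt 1.15 2` succeeds, which is why the lcov-version branch in the log selects the older-toolchain `LCOV_OPTS`.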
00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.285 22:38:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.285 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:42.831 22:38:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:42.831 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:42.831 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:42.831 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.831 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:07:42.832 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:42.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:42.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:07:42.832 00:07:42.832 --- 10.0.0.2 ping statistics --- 00:07:42.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.832 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:07:42.832 00:07:42.832 --- 10.0.0.1 ping statistics --- 00:07:42.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.832 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:42.832 22:38:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4155844 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4155844 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4155844 ']' 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.832 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.832 [2024-12-10 22:38:50.340738] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:07:42.832 [2024-12-10 22:38:50.340830] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.832 [2024-12-10 22:38:50.413906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:42.832 [2024-12-10 22:38:50.470459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.832 [2024-12-10 22:38:50.470523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.832 [2024-12-10 22:38:50.470558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.832 [2024-12-10 22:38:50.470570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.832 [2024-12-10 22:38:50.470579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:42.832 [2024-12-10 22:38:50.472046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.832 [2024-12-10 22:38:50.472052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.096 [2024-12-10 22:38:50.617011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.096 [2024-12-10 22:38:50.633200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.096 NULL1 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.096 Delay0 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.096 22:38:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4155869 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:43.096 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:43.097 [2024-12-10 22:38:50.718062] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:44.992 22:38:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.992 22:38:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.992 22:38:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error 
(sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 [2024-12-10 22:38:52.759543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18954a0 is same with the state(6) to be set 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 
Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Write completed with error (sct=0, sc=8) 
00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Write completed with error 
(sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 starting I/O failed: -6 00:07:45.250 [2024-12-10 22:38:52.760614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fde90000c80 is same with the state(6) to be set 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Write completed with error (sct=0, sc=8) 00:07:45.250 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Write completed with error (sct=0, sc=8) 00:07:45.251 Read completed 
with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Write completed with error (sct=0, sc=8) 00:07:45.251 Write completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Write completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:45.251 Read completed with error (sct=0, sc=8) 00:07:46.187 [2024-12-10 22:38:53.731909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18969b0 is same with the state(6) to be set 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, 
sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 [2024-12-10 22:38:53.761269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fde9000d060 is same with the state(6) to be set 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 
00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 [2024-12-10 22:38:53.761488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fde9000d820 is same with the state(6) to be set 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Write completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 00:07:46.188 Read completed with error (sct=0, sc=8) 
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Write completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Write completed with error (sct=0, sc=8)
00:07:46.188 Write completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 [2024-12-10 22:38:53.763276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18952c0 is same with the state(6) to be set
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Write completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Write completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Write completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Write completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 Read completed with error (sct=0, sc=8)
00:07:46.188 [2024-12-10 22:38:53.763889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1895680 is same with the state(6) to be set
00:07:46.188 Initializing NVMe Controllers
00:07:46.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:46.188 Controller IO queue size 128, less than required.
00:07:46.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:46.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:46.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:46.188 Initialization complete. Launching workers.
00:07:46.188 ========================================================
00:07:46.188 Latency(us)
00:07:46.188 Device Information : IOPS MiB/s Average min max
00:07:46.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.85 0.08 924241.07 408.51 1012157.74
00:07:46.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.81 0.08 975327.85 375.33 2001809.47
00:07:46.188 ========================================================
00:07:46.188 Total : 321.66 0.16 950257.49 375.33 2001809.47
00:07:46.188
00:07:46.188 [2024-12-10 22:38:53.764364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18969b0 (9): Bad file descriptor
00:07:46.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:46.188 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.188 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:46.188 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4155869
00:07:46.188 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4155869
00:07:46.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4155869) - No such process
00:07:46.753 22:38:54
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4155869 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4155869 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4155869 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:46.753 [2024-12-10 22:38:54.288288] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4156296 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4156296 00:07:46.753 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.753 [2024-12-10 22:38:54.360934] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:47.318 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.318 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4156296 00:07:47.318 22:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.882 22:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.882 22:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4156296 00:07:47.882 22:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.140 22:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:48.140 22:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4156296 00:07:48.140 22:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.705 22:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:48.705 22:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4156296 00:07:48.705 22:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:49.271 22:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:49.271 22:38:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4156296
00:07:49.271 22:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:49.836 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:49.836 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4156296
00:07:49.836 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:49.836 Initializing NVMe Controllers
00:07:49.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:49.836 Controller IO queue size 128, less than required.
00:07:49.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:49.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:49.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:49.836 Initialization complete. Launching workers.
00:07:49.836 ========================================================
00:07:49.836 Latency(us)
00:07:49.836 Device Information : IOPS MiB/s Average min max
00:07:49.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004160.29 1000212.28 1012940.03
00:07:49.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004092.41 1000176.77 1013280.05
00:07:49.836 ========================================================
00:07:49.836 Total : 256.00 0.12 1004126.35 1000176.77 1013280.05
00:07:49.836
00:07:50.094 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:50.094 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4156296
00:07:50.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4156296) - No such process
00:07:50.094 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4156296
00:07:50.094 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:50.094 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:50.094 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:50.094 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:50.094 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:50.094 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:50.094 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:50.094 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:07:50.352 rmmod nvme_tcp 00:07:50.352 rmmod nvme_fabrics 00:07:50.352 rmmod nvme_keyring 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4155844 ']' 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4155844 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4155844 ']' 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4155844 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4155844 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4155844' 00:07:50.352 killing process with pid 4155844 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4155844 00:07:50.352 22:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
4155844 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.612 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.516 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:52.516 00:07:52.516 real 0m12.429s 00:07:52.516 user 0m27.662s 00:07:52.516 sys 0m3.020s 00:07:52.516 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.516 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.516 ************************************ 00:07:52.516 END TEST 
nvmf_delete_subsystem 00:07:52.516 ************************************ 00:07:52.516 22:39:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:52.516 22:39:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.516 22:39:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.516 22:39:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.516 ************************************ 00:07:52.516 START TEST nvmf_host_management 00:07:52.516 ************************************ 00:07:52.516 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:52.775 * Looking for test storage... 00:07:52.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.775 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:52.775 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.776 22:39:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:52.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.776 --rc genhtml_branch_coverage=1 00:07:52.776 --rc genhtml_function_coverage=1 00:07:52.776 --rc genhtml_legend=1 00:07:52.776 --rc 
geninfo_all_blocks=1 00:07:52.776 --rc geninfo_unexecuted_blocks=1 00:07:52.776 00:07:52.776 ' 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:52.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.776 --rc genhtml_branch_coverage=1 00:07:52.776 --rc genhtml_function_coverage=1 00:07:52.776 --rc genhtml_legend=1 00:07:52.776 --rc geninfo_all_blocks=1 00:07:52.776 --rc geninfo_unexecuted_blocks=1 00:07:52.776 00:07:52.776 ' 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:52.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.776 --rc genhtml_branch_coverage=1 00:07:52.776 --rc genhtml_function_coverage=1 00:07:52.776 --rc genhtml_legend=1 00:07:52.776 --rc geninfo_all_blocks=1 00:07:52.776 --rc geninfo_unexecuted_blocks=1 00:07:52.776 00:07:52.776 ' 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:52.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.776 --rc genhtml_branch_coverage=1 00:07:52.776 --rc genhtml_function_coverage=1 00:07:52.776 --rc genhtml_legend=1 00:07:52.776 --rc geninfo_all_blocks=1 00:07:52.776 --rc geninfo_unexecuted_blocks=1 00:07:52.776 00:07:52.776 ' 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.776 
22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.776 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:52.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:52.777 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:55.312 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:55.312 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.312 22:39:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:55.312 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:55.312 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.312 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:55.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:07:55.313 00:07:55.313 --- 10.0.0.2 ping statistics --- 00:07:55.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.313 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:55.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:55.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:07:55.313 00:07:55.313 --- 10.0.0.1 ping statistics --- 00:07:55.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.313 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:55.313 22:39:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4158863 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4158863 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4158863 ']' 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.313 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.313 [2024-12-10 22:39:02.766480] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:07:55.313 [2024-12-10 22:39:02.766605] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.313 [2024-12-10 22:39:02.840713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.313 [2024-12-10 22:39:02.902283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.313 [2024-12-10 22:39:02.902341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.313 [2024-12-10 22:39:02.902370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.313 [2024-12-10 22:39:02.902382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.313 [2024-12-10 22:39:02.902392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:55.313 [2024-12-10 22:39:02.904154] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.313 [2024-12-10 22:39:02.904215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.313 [2024-12-10 22:39:02.904282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:55.313 [2024-12-10 22:39:02.904285] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.313 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.313 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:55.313 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:55.313 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:55.313 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.571 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.571 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.571 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.571 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.571 [2024-12-10 22:39:03.059681] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.571 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.571 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:55.572 22:39:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.572 Malloc0 00:07:55.572 [2024-12-10 22:39:03.139811] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4158906 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4158906 /var/tmp/bdevperf.sock 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4158906 ']' 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:55.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:55.572 { 00:07:55.572 "params": { 00:07:55.572 "name": "Nvme$subsystem", 00:07:55.572 "trtype": "$TEST_TRANSPORT", 00:07:55.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.572 "adrfam": "ipv4", 00:07:55.572 "trsvcid": "$NVMF_PORT", 00:07:55.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.572 "hdgst": ${hdgst:-false}, 
00:07:55.572 "ddgst": ${ddgst:-false} 00:07:55.572 }, 00:07:55.572 "method": "bdev_nvme_attach_controller" 00:07:55.572 } 00:07:55.572 EOF 00:07:55.572 )") 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:55.572 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:55.572 "params": { 00:07:55.572 "name": "Nvme0", 00:07:55.572 "trtype": "tcp", 00:07:55.572 "traddr": "10.0.0.2", 00:07:55.572 "adrfam": "ipv4", 00:07:55.572 "trsvcid": "4420", 00:07:55.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.572 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:55.572 "hdgst": false, 00:07:55.572 "ddgst": false 00:07:55.572 }, 00:07:55.572 "method": "bdev_nvme_attach_controller" 00:07:55.572 }' 00:07:55.572 [2024-12-10 22:39:03.227443] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:07:55.572 [2024-12-10 22:39:03.227519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4158906 ] 00:07:55.572 [2024-12-10 22:39:03.299916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.830 [2024-12-10 22:39:03.359909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.088 Running I/O for 10 seconds... 
00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.088 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.089 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.089 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:56.089 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:56.089 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:56.347 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:56.347 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:56.347 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:56.347 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:56.347 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.347 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=550 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 550 -ge 100 ']' 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.607 [2024-12-10 22:39:04.115717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.607 [2024-12-10 22:39:04.115762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.115781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.607 [2024-12-10 22:39:04.115795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.115809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.607 [2024-12-10 22:39:04.115823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.115847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.607 [2024-12-10 22:39:04.115860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.115873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dd670 is same with the state(6) to be set 00:07:56.607 [2024-12-10 22:39:04.119070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 
[2024-12-10 22:39:04.119196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.607 [2024-12-10 22:39:04.119259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:56.607 [2024-12-10 22:39:04.119412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.607 [2024-12-10 22:39:04.119661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.607 [2024-12-10 22:39:04.119675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.119689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.119704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.119718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.119733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.119746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.119761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.119775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.119790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.119804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.119819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.119833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.119847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.119861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.119876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.119890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.119909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.119924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.119939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.119953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.119968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.119981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.119996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 
22:39:04.120068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 
[2024-12-10 22:39:04.120762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.608 [2024-12-10 22:39:04.120820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.608 [2024-12-10 22:39:04.120834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.609 [2024-12-10 22:39:04.120851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.609 [2024-12-10 22:39:04.120865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.609 [2024-12-10 22:39:04.120880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.609 [2024-12-10 22:39:04.120894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.609 [2024-12-10 22:39:04.120910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.609 [2024-12-10 22:39:04.120924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.609 [2024-12-10 22:39:04.120939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.609 [2024-12-10 22:39:04.120953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.609 [2024-12-10 22:39:04.120969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.609 [2024-12-10 22:39:04.120983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.609 [2024-12-10 22:39:04.120999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.609 [2024-12-10 22:39:04.121013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.609 [2024-12-10 22:39:04.121029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.609 [2024-12-10 22:39:04.121047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.609 [2024-12-10 22:39:04.121063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.609 [2024-12-10 22:39:04.121079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.609 [2024-12-10 22:39:04.122304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:56.609 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:56.609 00:07:56.609 Latency(us) 00:07:56.609 [2024-12-10T21:39:04.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.609 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:56.609 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:56.609 Verification LBA range: start 0x0 length 0x400 00:07:56.609 Nvme0n1 : 0.41 1547.82 96.74 154.78 0.00 36535.00 2439.40 35923.44 00:07:56.609 [2024-12-10T21:39:04.341Z] =================================================================================================================== 00:07:56.609 [2024-12-10T21:39:04.341Z] Total : 1547.82 96.74 154.78 0.00 36535.00 2439.40 35923.44 00:07:56.609 [2024-12-10 22:39:04.124210] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.609 [2024-12-10 22:39:04.124254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dd670 (9): Bad file descriptor 00:07:56.609 [2024-12-10 22:39:04.136041] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4158906 00:07:57.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4158906) - No such process 00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.543 { 00:07:57.543 "params": { 00:07:57.543 "name": "Nvme$subsystem", 00:07:57.543 "trtype": "$TEST_TRANSPORT", 00:07:57.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.543 "adrfam": "ipv4", 00:07:57.543 "trsvcid": "$NVMF_PORT", 00:07:57.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.543 "hdgst": ${hdgst:-false}, 00:07:57.543 "ddgst": ${ddgst:-false} 00:07:57.543 }, 00:07:57.543 "method": "bdev_nvme_attach_controller" 00:07:57.543 } 00:07:57.543 EOF 00:07:57.543 )") 00:07:57.543 
22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:57.543 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.543 "params": { 00:07:57.543 "name": "Nvme0", 00:07:57.543 "trtype": "tcp", 00:07:57.543 "traddr": "10.0.0.2", 00:07:57.543 "adrfam": "ipv4", 00:07:57.543 "trsvcid": "4420", 00:07:57.543 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:57.543 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:57.543 "hdgst": false, 00:07:57.543 "ddgst": false 00:07:57.543 }, 00:07:57.543 "method": "bdev_nvme_attach_controller" 00:07:57.543 }' 00:07:57.543 [2024-12-10 22:39:05.173309] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:07:57.543 [2024-12-10 22:39:05.173389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159188 ] 00:07:57.543 [2024-12-10 22:39:05.243443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.802 [2024-12-10 22:39:05.303648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.060 Running I/O for 1 seconds... 
00:07:59.254 1536.00 IOPS, 96.00 MiB/s 00:07:59.254 Latency(us) 00:07:59.254 [2024-12-10T21:39:06.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.254 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:59.254 Verification LBA range: start 0x0 length 0x400 00:07:59.254 Nvme0n1 : 1.03 1547.53 96.72 0.00 0.00 40709.93 7087.60 37865.24 00:07:59.254 [2024-12-10T21:39:06.986Z] =================================================================================================================== 00:07:59.254 [2024-12-10T21:39:06.986Z] Total : 1547.53 96.72 0.00 0.00 40709.93 7087.60 37865.24 00:07:59.254 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:59.254 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:59.254 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:59.254 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:59.254 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:59.254 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:59.254 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:59.254 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.254 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:59.254 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.254 22:39:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.254 rmmod nvme_tcp 00:07:59.254 rmmod nvme_fabrics 00:07:59.254 rmmod nvme_keyring 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4158863 ']' 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4158863 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4158863 ']' 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4158863 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4158863 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4158863' 00:07:59.513 killing process with pid 4158863 00:07:59.513 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4158863 00:07:59.513 22:39:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4158863 00:07:59.774 [2024-12-10 22:39:07.276683] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.774 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.676 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.676 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:01.676 00:08:01.676 real 0m9.114s 00:08:01.676 user 0m20.808s 
00:08:01.676 sys 0m2.844s 00:08:01.676 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.676 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:01.676 ************************************ 00:08:01.676 END TEST nvmf_host_management 00:08:01.676 ************************************ 00:08:01.676 22:39:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:01.676 22:39:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.676 22:39:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.676 22:39:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.676 ************************************ 00:08:01.677 START TEST nvmf_lvol 00:08:01.677 ************************************ 00:08:01.677 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:01.935 * Looking for test storage... 
00:08:01.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.935 22:39:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:01.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.935 --rc genhtml_branch_coverage=1 00:08:01.935 --rc genhtml_function_coverage=1 00:08:01.935 --rc genhtml_legend=1 00:08:01.935 --rc geninfo_all_blocks=1 00:08:01.935 --rc geninfo_unexecuted_blocks=1 
00:08:01.935 00:08:01.935 ' 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:01.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.935 --rc genhtml_branch_coverage=1 00:08:01.935 --rc genhtml_function_coverage=1 00:08:01.935 --rc genhtml_legend=1 00:08:01.935 --rc geninfo_all_blocks=1 00:08:01.935 --rc geninfo_unexecuted_blocks=1 00:08:01.935 00:08:01.935 ' 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:01.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.935 --rc genhtml_branch_coverage=1 00:08:01.935 --rc genhtml_function_coverage=1 00:08:01.935 --rc genhtml_legend=1 00:08:01.935 --rc geninfo_all_blocks=1 00:08:01.935 --rc geninfo_unexecuted_blocks=1 00:08:01.935 00:08:01.935 ' 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:01.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.935 --rc genhtml_branch_coverage=1 00:08:01.935 --rc genhtml_function_coverage=1 00:08:01.935 --rc genhtml_legend=1 00:08:01.935 --rc geninfo_all_blocks=1 00:08:01.935 --rc geninfo_unexecuted_blocks=1 00:08:01.935 00:08:01.935 ' 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.935 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.936 22:39:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.936 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:04.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:04.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.471 
22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:04.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.471 22:39:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:04.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:04.471 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:04.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:04.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:08:04.472 00:08:04.472 --- 10.0.0.2 ping statistics --- 00:08:04.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.472 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:08:04.472 00:08:04.472 --- 10.0.0.1 ping statistics --- 00:08:04.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.472 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4161914 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4161914 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4161914 ']' 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.472 22:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:04.472 [2024-12-10 22:39:12.022365] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:08:04.472 [2024-12-10 22:39:12.022451] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.472 [2024-12-10 22:39:12.092924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:04.472 [2024-12-10 22:39:12.145945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.472 [2024-12-10 22:39:12.146004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.472 [2024-12-10 22:39:12.146033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.472 [2024-12-10 22:39:12.146044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.472 [2024-12-10 22:39:12.146053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
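The interface plumbing traced earlier (common.sh@267 through @291) moves the target-side NIC into a private network namespace, addresses both ends, and verifies connectivity with the two pings. A dry-run sketch of that sequence, with `echo` standing in for `ip` so it runs without root or the cvl_* interfaces:

```shell
# Dry-run of the namespace setup traced in the log: "echo" prints each
# command instead of executing it (the real ones need root and the NICs).
# Drop the "echo" prefix to perform the setup for real.
IP="echo ip"
$IP netns add cvl_0_0_ns_spdk                        # target-side namespace
$IP link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target NIC into it
$IP addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
$IP netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
$IP link set cvl_0_1 up
$IP netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
$IP netns exec cvl_0_0_ns_spdk ip link set lo up
```

Note the `NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")` array in the trace: storing the prefix as an array lets it expand cleanly in front of another command, which is how `nvmf_tgt` is later launched inside the namespace.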
00:08:04.472 [2024-12-10 22:39:12.147491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.472 [2024-12-10 22:39:12.147576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.472 [2024-12-10 22:39:12.147581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.730 22:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.730 22:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:04.730 22:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.730 22:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.731 22:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:04.731 22:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.731 22:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:04.988 [2024-12-10 22:39:12.554917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.988 22:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:05.245 22:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:05.245 22:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:05.503 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:05.503 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:05.761 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:06.019 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4d5a784b-6c92-454a-833e-5b55cd229401 00:08:06.019 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4d5a784b-6c92-454a-833e-5b55cd229401 lvol 20 00:08:06.277 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f9fab660-5b3f-4b1d-a6ce-b78ec152f88d 00:08:06.277 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:06.841 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9fab660-5b3f-4b1d-a6ce-b78ec152f88d 00:08:06.841 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:07.098 [2024-12-10 22:39:14.793677] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.098 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.356 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4162341 00:08:07.356 22:39:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:07.356 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:08.729 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f9fab660-5b3f-4b1d-a6ce-b78ec152f88d MY_SNAPSHOT 00:08:08.729 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e649dde7-d9fd-420e-96de-5baf3be241a1 00:08:08.729 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f9fab660-5b3f-4b1d-a6ce-b78ec152f88d 30 00:08:08.987 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e649dde7-d9fd-420e-96de-5baf3be241a1 MY_CLONE 00:08:09.553 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4818302c-7dc8-496c-8126-1c717cbfbf45 00:08:09.553 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4818302c-7dc8-496c-8126-1c717cbfbf45 00:08:10.119 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4162341 00:08:18.229 Initializing NVMe Controllers 00:08:18.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:18.229 Controller IO queue size 128, less than required. 00:08:18.229 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
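The bdev/lvol sequence that nvmf_lvol.sh drives through rpc.py above condenses into one recipe. A dry-run sketch (echoed, since the real calls need a live SPDK target on /var/tmp/spdk.sock; the UUIDs are the ones this particular run returned, and sizes match the trace):

```shell
# Dry-run of the lvol test flow traced in the log; remove "echo" to run
# against a live target. Two 64 MiB / 512 B-block malloc bdevs striped into
# raid0 (64 KiB strip), an lvstore on top, a 20 MiB lvol resized to 30,
# then snapshot -> clone -> inflate while perf I/O is in flight.
RPC="echo scripts/rpc.py"
$RPC bdev_malloc_create 64 512                                # -> Malloc0
$RPC bdev_malloc_create 64 512                                # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
$RPC bdev_lvol_create_lvstore raid0 lvs                       # -> lvstore UUID
$RPC bdev_lvol_create -u 4d5a784b-6c92-454a-833e-5b55cd229401 lvol 20
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9fab660-5b3f-4b1d-a6ce-b78ec152f88d
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_lvol_snapshot f9fab660-5b3f-4b1d-a6ce-b78ec152f88d MY_SNAPSHOT
$RPC bdev_lvol_resize f9fab660-5b3f-4b1d-a6ce-b78ec152f88d 30
$RPC bdev_lvol_clone e649dde7-d9fd-420e-96de-5baf3be241a1 MY_CLONE
$RPC bdev_lvol_inflate 4818302c-7dc8-496c-8126-1c717cbfbf45
```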
00:08:18.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:18.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:18.230 Initialization complete. Launching workers. 00:08:18.230 ======================================================== 00:08:18.230 Latency(us) 00:08:18.230 Device Information : IOPS MiB/s Average min max 00:08:18.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10595.70 41.39 12082.57 1070.81 132318.06 00:08:18.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10535.90 41.16 12157.39 2118.07 51714.89 00:08:18.230 ======================================================== 00:08:18.230 Total : 21131.60 82.55 12119.87 1070.81 132318.06 00:08:18.230 00:08:18.230 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:18.230 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f9fab660-5b3f-4b1d-a6ce-b78ec152f88d 00:08:18.488 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d5a784b-6c92-454a-833e-5b55cd229401 00:08:18.745 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:18.745 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:18.745 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:18.745 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:18.745 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:18.745 22:39:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.745 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:18.745 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.746 rmmod nvme_tcp 00:08:18.746 rmmod nvme_fabrics 00:08:18.746 rmmod nvme_keyring 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4161914 ']' 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4161914 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4161914 ']' 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4161914 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4161914 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4161914' 00:08:18.746 killing process with pid 4161914 00:08:18.746 
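killprocess, traced just above, first checks the PID is still alive and recovers its command name before deciding how to terminate it. A minimal sketch of those two checks, run here against the current shell so it is safe to execute:

```shell
# "kill -0" delivers no signal; it only tests that the PID exists and is
# signalable. ps -o comm= then recovers the process name, which killprocess
# compares against "sudo" to pick the right termination path.
pid=$$
if kill -0 "$pid" 2>/dev/null; then
  name=$(ps --no-headers -o comm= "$pid")
  echo "pid $pid alive, comm=$name"
else
  echo "pid $pid gone"
fi
```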
22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4161914 00:08:18.746 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4161914 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.005 22:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.544 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:21.544 00:08:21.545 real 0m19.319s 00:08:21.545 user 1m5.737s 00:08:21.545 sys 0m5.460s 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.545 ************************************ 00:08:21.545 
END TEST nvmf_lvol 00:08:21.545 ************************************ 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.545 ************************************ 00:08:21.545 START TEST nvmf_lvs_grow 00:08:21.545 ************************************ 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:21.545 * Looking for test storage... 00:08:21.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.545 22:39:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:21.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.545 --rc genhtml_branch_coverage=1 00:08:21.545 --rc genhtml_function_coverage=1 00:08:21.545 --rc genhtml_legend=1 00:08:21.545 --rc geninfo_all_blocks=1 00:08:21.545 --rc geninfo_unexecuted_blocks=1 00:08:21.545 00:08:21.545 ' 
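The cmp_versions trace above splits each dotted version on `IFS=.-:` and compares field by field, numerically. A compact bash sketch of the same idea (a simplified illustration, not SPDK's exact implementation):

```shell
# Field-wise dotted-version "less than", echoing the cmp_versions trace:
# split on dots, compare each field numerically, missing fields count as 0.
ver_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0} y=${b[i]:-0}
    ((x < y)) && return 0
    ((x > y)) && return 1
  done
  return 1   # equal is not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"       # the lcov check in the trace
ver_lt 1.2 1.10 && echo "1.2 < 1.10"   # numeric, not lexicographic
```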
00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:21.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.545 --rc genhtml_branch_coverage=1 00:08:21.545 --rc genhtml_function_coverage=1 00:08:21.545 --rc genhtml_legend=1 00:08:21.545 --rc geninfo_all_blocks=1 00:08:21.545 --rc geninfo_unexecuted_blocks=1 00:08:21.545 00:08:21.545 ' 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:21.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.545 --rc genhtml_branch_coverage=1 00:08:21.545 --rc genhtml_function_coverage=1 00:08:21.545 --rc genhtml_legend=1 00:08:21.545 --rc geninfo_all_blocks=1 00:08:21.545 --rc geninfo_unexecuted_blocks=1 00:08:21.545 00:08:21.545 ' 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:21.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.545 --rc genhtml_branch_coverage=1 00:08:21.545 --rc genhtml_function_coverage=1 00:08:21.545 --rc genhtml_legend=1 00:08:21.545 --rc geninfo_all_blocks=1 00:08:21.545 --rc geninfo_unexecuted_blocks=1 00:08:21.545 00:08:21.545 ' 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.545 22:39:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.545 
22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.545 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.546 22:39:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.546 
22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.546 22:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:23.452 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:23.452 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.452 
22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:23.452 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:23.452 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:23.452 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.453 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.712 22:39:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:08:23.712 00:08:23.712 --- 10.0.0.2 ping statistics --- 00:08:23.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.712 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:08:23.712 00:08:23.712 --- 10.0.0.1 ping statistics --- 00:08:23.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.712 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4165628 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4165628 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 4165628 ']' 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.712 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.712 [2024-12-10 22:39:31.320898] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:08:23.712 [2024-12-10 22:39:31.321000] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.712 [2024-12-10 22:39:31.393904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.970 [2024-12-10 22:39:31.453929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.970 [2024-12-10 22:39:31.453979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.970 [2024-12-10 22:39:31.454007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.970 [2024-12-10 22:39:31.454018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.970 [2024-12-10 22:39:31.454029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:23.970 [2024-12-10 22:39:31.454700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.970 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.970 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:23.970 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:23.970 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:23.970 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.970 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.970 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:24.228 [2024-12-10 22:39:31.852137] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.229 ************************************ 00:08:24.229 START TEST lvs_grow_clean 00:08:24.229 ************************************ 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.229 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.486 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:24.486 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:24.744 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d4acd263-1652-409d-b04a-8868b1aa3862 00:08:24.744 22:39:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4acd263-1652-409d-b04a-8868b1aa3862 00:08:24.744 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:25.002 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:25.002 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:25.002 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d4acd263-1652-409d-b04a-8868b1aa3862 lvol 150 00:08:25.259 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8a14fba8-535e-4b7b-954c-2f5ea074574f 00:08:25.259 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.259 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:25.517 [2024-12-10 22:39:33.241944] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:25.517 [2024-12-10 22:39:33.242039] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:25.517 true 00:08:25.774 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4acd263-1652-409d-b04a-8868b1aa3862 00:08:25.774 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:26.032 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:26.032 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.290 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8a14fba8-535e-4b7b-954c-2f5ea074574f 00:08:26.548 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:26.807 [2024-12-10 22:39:34.321226] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.807 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.065 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4166065 00:08:27.065 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:27.065 22:39:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.065 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4166065 /var/tmp/bdevperf.sock 00:08:27.065 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 4166065 ']' 00:08:27.065 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:27.065 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.065 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:27.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:27.065 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.065 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:27.065 [2024-12-10 22:39:34.654087] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:08:27.065 [2024-12-10 22:39:34.654177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166065 ] 00:08:27.065 [2024-12-10 22:39:34.723048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.065 [2024-12-10 22:39:34.782580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.323 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.323 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:27.323 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:27.580 Nvme0n1 00:08:27.580 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:27.838 [ 00:08:27.838 { 00:08:27.838 "name": "Nvme0n1", 00:08:27.838 "aliases": [ 00:08:27.838 "8a14fba8-535e-4b7b-954c-2f5ea074574f" 00:08:27.838 ], 00:08:27.838 "product_name": "NVMe disk", 00:08:27.838 "block_size": 4096, 00:08:27.838 "num_blocks": 38912, 00:08:27.838 "uuid": "8a14fba8-535e-4b7b-954c-2f5ea074574f", 00:08:27.838 "numa_id": 0, 00:08:27.838 "assigned_rate_limits": { 00:08:27.838 "rw_ios_per_sec": 0, 00:08:27.838 "rw_mbytes_per_sec": 0, 00:08:27.838 "r_mbytes_per_sec": 0, 00:08:27.838 "w_mbytes_per_sec": 0 00:08:27.838 }, 00:08:27.838 "claimed": false, 00:08:27.838 "zoned": false, 00:08:27.838 "supported_io_types": { 00:08:27.838 "read": true, 
00:08:27.838 "write": true, 00:08:27.838 "unmap": true, 00:08:27.838 "flush": true, 00:08:27.838 "reset": true, 00:08:27.838 "nvme_admin": true, 00:08:27.838 "nvme_io": true, 00:08:27.838 "nvme_io_md": false, 00:08:27.838 "write_zeroes": true, 00:08:27.838 "zcopy": false, 00:08:27.838 "get_zone_info": false, 00:08:27.838 "zone_management": false, 00:08:27.838 "zone_append": false, 00:08:27.838 "compare": true, 00:08:27.838 "compare_and_write": true, 00:08:27.838 "abort": true, 00:08:27.838 "seek_hole": false, 00:08:27.838 "seek_data": false, 00:08:27.838 "copy": true, 00:08:27.838 "nvme_iov_md": false 00:08:27.838 }, 00:08:27.838 "memory_domains": [ 00:08:27.838 { 00:08:27.838 "dma_device_id": "system", 00:08:27.838 "dma_device_type": 1 00:08:27.838 } 00:08:27.838 ], 00:08:27.838 "driver_specific": { 00:08:27.838 "nvme": [ 00:08:27.838 { 00:08:27.838 "trid": { 00:08:27.838 "trtype": "TCP", 00:08:27.838 "adrfam": "IPv4", 00:08:27.838 "traddr": "10.0.0.2", 00:08:27.838 "trsvcid": "4420", 00:08:27.838 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:27.838 }, 00:08:27.838 "ctrlr_data": { 00:08:27.838 "cntlid": 1, 00:08:27.838 "vendor_id": "0x8086", 00:08:27.838 "model_number": "SPDK bdev Controller", 00:08:27.838 "serial_number": "SPDK0", 00:08:27.838 "firmware_revision": "25.01", 00:08:27.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.838 "oacs": { 00:08:27.838 "security": 0, 00:08:27.838 "format": 0, 00:08:27.838 "firmware": 0, 00:08:27.838 "ns_manage": 0 00:08:27.838 }, 00:08:27.838 "multi_ctrlr": true, 00:08:27.838 "ana_reporting": false 00:08:27.838 }, 00:08:27.838 "vs": { 00:08:27.838 "nvme_version": "1.3" 00:08:27.838 }, 00:08:27.838 "ns_data": { 00:08:27.838 "id": 1, 00:08:27.838 "can_share": true 00:08:27.838 } 00:08:27.838 } 00:08:27.838 ], 00:08:27.838 "mp_policy": "active_passive" 00:08:27.838 } 00:08:27.838 } 00:08:27.838 ] 00:08:27.838 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=4166199 00:08:27.838 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:27.838 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:28.101 Running I/O for 10 seconds... 00:08:29.101 Latency(us) 00:08:29.101 [2024-12-10T21:39:36.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.102 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:08:29.102 [2024-12-10T21:39:36.834Z] =================================================================================================================== 00:08:29.102 [2024-12-10T21:39:36.834Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:08:29.102 00:08:30.035 22:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d4acd263-1652-409d-b04a-8868b1aa3862 00:08:30.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.036 Nvme0n1 : 2.00 15082.50 58.92 0.00 0.00 0.00 0.00 0.00 00:08:30.036 [2024-12-10T21:39:37.768Z] =================================================================================================================== 00:08:30.036 [2024-12-10T21:39:37.768Z] Total : 15082.50 58.92 0.00 0.00 0.00 0.00 0.00 00:08:30.036 00:08:30.293 true 00:08:30.293 22:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4acd263-1652-409d-b04a-8868b1aa3862 00:08:30.293 22:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:30.551 22:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:30.551 22:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:30.551 22:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4166199 00:08:31.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.117 Nvme0n1 : 3.00 15178.00 59.29 0.00 0.00 0.00 0.00 0.00 00:08:31.117 [2024-12-10T21:39:38.849Z] =================================================================================================================== 00:08:31.117 [2024-12-10T21:39:38.849Z] Total : 15178.00 59.29 0.00 0.00 0.00 0.00 0.00 00:08:31.117 00:08:32.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.051 Nvme0n1 : 4.00 15304.50 59.78 0.00 0.00 0.00 0.00 0.00 00:08:32.051 [2024-12-10T21:39:39.783Z] =================================================================================================================== 00:08:32.051 [2024-12-10T21:39:39.783Z] Total : 15304.50 59.78 0.00 0.00 0.00 0.00 0.00 00:08:32.051 00:08:32.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.983 Nvme0n1 : 5.00 15355.40 59.98 0.00 0.00 0.00 0.00 0.00 00:08:32.983 [2024-12-10T21:39:40.715Z] =================================================================================================================== 00:08:32.983 [2024-12-10T21:39:40.715Z] Total : 15355.40 59.98 0.00 0.00 0.00 0.00 0.00 00:08:32.983 00:08:34.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.357 Nvme0n1 : 6.00 15420.83 60.24 0.00 0.00 0.00 0.00 0.00 00:08:34.357 [2024-12-10T21:39:42.089Z] =================================================================================================================== 00:08:34.357 
[2024-12-10T21:39:42.089Z] Total : 15420.83 60.24 0.00 0.00 0.00 0.00 0.00 00:08:34.357 00:08:35.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.291 Nvme0n1 : 7.00 15467.57 60.42 0.00 0.00 0.00 0.00 0.00 00:08:35.291 [2024-12-10T21:39:43.023Z] =================================================================================================================== 00:08:35.291 [2024-12-10T21:39:43.023Z] Total : 15467.57 60.42 0.00 0.00 0.00 0.00 0.00 00:08:35.291 00:08:36.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.225 Nvme0n1 : 8.00 15518.62 60.62 0.00 0.00 0.00 0.00 0.00 00:08:36.225 [2024-12-10T21:39:43.957Z] =================================================================================================================== 00:08:36.225 [2024-12-10T21:39:43.957Z] Total : 15518.62 60.62 0.00 0.00 0.00 0.00 0.00 00:08:36.225 00:08:37.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.159 Nvme0n1 : 9.00 15558.22 60.77 0.00 0.00 0.00 0.00 0.00 00:08:37.159 [2024-12-10T21:39:44.891Z] =================================================================================================================== 00:08:37.159 [2024-12-10T21:39:44.891Z] Total : 15558.22 60.77 0.00 0.00 0.00 0.00 0.00 00:08:37.159 00:08:38.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.092 Nvme0n1 : 10.00 15589.90 60.90 0.00 0.00 0.00 0.00 0.00 00:08:38.092 [2024-12-10T21:39:45.824Z] =================================================================================================================== 00:08:38.092 [2024-12-10T21:39:45.824Z] Total : 15589.90 60.90 0.00 0.00 0.00 0.00 0.00 00:08:38.092 00:08:38.092 00:08:38.092 Latency(us) 00:08:38.092 [2024-12-10T21:39:45.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:38.092 Nvme0n1 : 10.01 15590.43 60.90 0.00 0.00 8205.28 5121.52 15825.73 00:08:38.092 [2024-12-10T21:39:45.824Z] =================================================================================================================== 00:08:38.092 [2024-12-10T21:39:45.824Z] Total : 15590.43 60.90 0.00 0.00 8205.28 5121.52 15825.73 00:08:38.092 { 00:08:38.092 "results": [ 00:08:38.092 { 00:08:38.092 "job": "Nvme0n1", 00:08:38.092 "core_mask": "0x2", 00:08:38.092 "workload": "randwrite", 00:08:38.092 "status": "finished", 00:08:38.092 "queue_depth": 128, 00:08:38.092 "io_size": 4096, 00:08:38.092 "runtime": 10.007873, 00:08:38.092 "iops": 15590.425657879552, 00:08:38.092 "mibps": 60.900100226092, 00:08:38.092 "io_failed": 0, 00:08:38.092 "io_timeout": 0, 00:08:38.092 "avg_latency_us": 8205.281733299245, 00:08:38.093 "min_latency_us": 5121.517037037037, 00:08:38.093 "max_latency_us": 15825.730370370371 00:08:38.093 } 00:08:38.093 ], 00:08:38.093 "core_count": 1 00:08:38.093 } 00:08:38.093 22:39:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4166065 00:08:38.093 22:39:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 4166065 ']' 00:08:38.093 22:39:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 4166065 00:08:38.093 22:39:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:38.093 22:39:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.093 22:39:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4166065 00:08:38.093 22:39:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:38.093 22:39:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:38.093 22:39:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4166065' 00:08:38.093 killing process with pid 4166065 00:08:38.093 22:39:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 4166065 00:08:38.093 Received shutdown signal, test time was about 10.000000 seconds 00:08:38.093 00:08:38.093 Latency(us) 00:08:38.093 [2024-12-10T21:39:45.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.093 [2024-12-10T21:39:45.825Z] =================================================================================================================== 00:08:38.093 [2024-12-10T21:39:45.825Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:38.093 22:39:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 4166065 00:08:38.350 22:39:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.608 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.866 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4acd263-1652-409d-b04a-8868b1aa3862 00:08:38.866 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:39.124 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:39.124 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:39.124 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.382 [2024-12-10 22:39:47.038355] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:39.382 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4acd263-1652-409d-b04a-8868b1aa3862 00:08:39.382 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:39.382 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4acd263-1652-409d-b04a-8868b1aa3862 00:08:39.382 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.382 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.382 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.382 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.382 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.382 
22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.382 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.382 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:39.382 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4acd263-1652-409d-b04a-8868b1aa3862 00:08:39.640 request: 00:08:39.640 { 00:08:39.640 "uuid": "d4acd263-1652-409d-b04a-8868b1aa3862", 00:08:39.640 "method": "bdev_lvol_get_lvstores", 00:08:39.640 "req_id": 1 00:08:39.640 } 00:08:39.640 Got JSON-RPC error response 00:08:39.640 response: 00:08:39.640 { 00:08:39.640 "code": -19, 00:08:39.640 "message": "No such device" 00:08:39.640 } 00:08:39.640 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:39.640 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:39.640 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:39.640 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:39.640 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.899 aio_bdev 00:08:39.899 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8a14fba8-535e-4b7b-954c-2f5ea074574f 00:08:39.899 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8a14fba8-535e-4b7b-954c-2f5ea074574f 00:08:39.899 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.899 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:39.899 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.899 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.899 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:40.465 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8a14fba8-535e-4b7b-954c-2f5ea074574f -t 2000 00:08:40.465 [ 00:08:40.465 { 00:08:40.465 "name": "8a14fba8-535e-4b7b-954c-2f5ea074574f", 00:08:40.465 "aliases": [ 00:08:40.465 "lvs/lvol" 00:08:40.465 ], 00:08:40.465 "product_name": "Logical Volume", 00:08:40.465 "block_size": 4096, 00:08:40.465 "num_blocks": 38912, 00:08:40.465 "uuid": "8a14fba8-535e-4b7b-954c-2f5ea074574f", 00:08:40.465 "assigned_rate_limits": { 00:08:40.465 "rw_ios_per_sec": 0, 00:08:40.465 "rw_mbytes_per_sec": 0, 00:08:40.465 "r_mbytes_per_sec": 0, 00:08:40.465 "w_mbytes_per_sec": 0 00:08:40.465 }, 00:08:40.465 "claimed": false, 00:08:40.465 "zoned": false, 00:08:40.465 "supported_io_types": { 00:08:40.465 "read": true, 00:08:40.465 "write": true, 00:08:40.465 "unmap": true, 00:08:40.465 "flush": false, 00:08:40.465 "reset": true, 00:08:40.465 
"nvme_admin": false, 00:08:40.465 "nvme_io": false, 00:08:40.465 "nvme_io_md": false, 00:08:40.465 "write_zeroes": true, 00:08:40.465 "zcopy": false, 00:08:40.465 "get_zone_info": false, 00:08:40.465 "zone_management": false, 00:08:40.465 "zone_append": false, 00:08:40.465 "compare": false, 00:08:40.465 "compare_and_write": false, 00:08:40.465 "abort": false, 00:08:40.465 "seek_hole": true, 00:08:40.465 "seek_data": true, 00:08:40.465 "copy": false, 00:08:40.465 "nvme_iov_md": false 00:08:40.465 }, 00:08:40.465 "driver_specific": { 00:08:40.465 "lvol": { 00:08:40.465 "lvol_store_uuid": "d4acd263-1652-409d-b04a-8868b1aa3862", 00:08:40.465 "base_bdev": "aio_bdev", 00:08:40.465 "thin_provision": false, 00:08:40.465 "num_allocated_clusters": 38, 00:08:40.465 "snapshot": false, 00:08:40.465 "clone": false, 00:08:40.465 "esnap_clone": false 00:08:40.465 } 00:08:40.465 } 00:08:40.465 } 00:08:40.465 ] 00:08:40.465 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:40.465 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4acd263-1652-409d-b04a-8868b1aa3862 00:08:40.465 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:40.723 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:40.723 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4acd263-1652-409d-b04a-8868b1aa3862 00:08:40.723 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:40.981 22:39:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:40.981 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8a14fba8-535e-4b7b-954c-2f5ea074574f 00:08:41.547 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d4acd263-1652-409d-b04a-8868b1aa3862 00:08:41.547 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.805 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.063 00:08:42.063 real 0m17.666s 00:08:42.063 user 0m17.228s 00:08:42.063 sys 0m1.801s 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:42.063 ************************************ 00:08:42.063 END TEST lvs_grow_clean 00:08:42.063 ************************************ 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:42.063 ************************************ 
00:08:42.063 START TEST lvs_grow_dirty 00:08:42.063 ************************************ 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.063 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.321 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:42.321 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:42.579 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:42.579 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:42.579 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:42.836 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:42.836 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:42.836 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e lvol 150 00:08:43.094 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=079b872c-8339-49ba-a633-23ef271529c5 00:08:43.095 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.095 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:43.353 [2024-12-10 22:39:50.963052] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:43.353 [2024-12-10 22:39:50.963142] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:43.353 true 00:08:43.353 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:43.353 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:43.610 22:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:43.610 22:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:43.868 22:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 079b872c-8339-49ba-a633-23ef271529c5 00:08:44.126 22:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:44.384 [2024-12-10 22:39:52.034280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.384 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.642 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4168257 00:08:44.642 22:39:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:44.642 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:44.642 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4168257 /var/tmp/bdevperf.sock 00:08:44.643 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4168257 ']' 00:08:44.643 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:44.643 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.643 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:44.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:44.643 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.643 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.643 [2024-12-10 22:39:52.361751] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:08:44.643 [2024-12-10 22:39:52.361845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168257 ] 00:08:44.901 [2024-12-10 22:39:52.429257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.901 [2024-12-10 22:39:52.486117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.901 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.901 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:44.901 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:45.466 Nvme0n1 00:08:45.466 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:45.724 [ 00:08:45.724 { 00:08:45.724 "name": "Nvme0n1", 00:08:45.724 "aliases": [ 00:08:45.724 "079b872c-8339-49ba-a633-23ef271529c5" 00:08:45.724 ], 00:08:45.724 "product_name": "NVMe disk", 00:08:45.724 "block_size": 4096, 00:08:45.724 "num_blocks": 38912, 00:08:45.724 "uuid": "079b872c-8339-49ba-a633-23ef271529c5", 00:08:45.724 "numa_id": 0, 00:08:45.724 "assigned_rate_limits": { 00:08:45.724 "rw_ios_per_sec": 0, 00:08:45.724 "rw_mbytes_per_sec": 0, 00:08:45.724 "r_mbytes_per_sec": 0, 00:08:45.724 "w_mbytes_per_sec": 0 00:08:45.724 }, 00:08:45.724 "claimed": false, 00:08:45.724 "zoned": false, 00:08:45.724 "supported_io_types": { 00:08:45.724 "read": true, 
00:08:45.724 "write": true, 00:08:45.724 "unmap": true, 00:08:45.724 "flush": true, 00:08:45.724 "reset": true, 00:08:45.724 "nvme_admin": true, 00:08:45.724 "nvme_io": true, 00:08:45.724 "nvme_io_md": false, 00:08:45.724 "write_zeroes": true, 00:08:45.724 "zcopy": false, 00:08:45.724 "get_zone_info": false, 00:08:45.724 "zone_management": false, 00:08:45.724 "zone_append": false, 00:08:45.724 "compare": true, 00:08:45.724 "compare_and_write": true, 00:08:45.724 "abort": true, 00:08:45.724 "seek_hole": false, 00:08:45.724 "seek_data": false, 00:08:45.724 "copy": true, 00:08:45.724 "nvme_iov_md": false 00:08:45.724 }, 00:08:45.724 "memory_domains": [ 00:08:45.724 { 00:08:45.724 "dma_device_id": "system", 00:08:45.724 "dma_device_type": 1 00:08:45.724 } 00:08:45.724 ], 00:08:45.724 "driver_specific": { 00:08:45.724 "nvme": [ 00:08:45.724 { 00:08:45.724 "trid": { 00:08:45.724 "trtype": "TCP", 00:08:45.724 "adrfam": "IPv4", 00:08:45.724 "traddr": "10.0.0.2", 00:08:45.724 "trsvcid": "4420", 00:08:45.724 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:45.724 }, 00:08:45.724 "ctrlr_data": { 00:08:45.724 "cntlid": 1, 00:08:45.724 "vendor_id": "0x8086", 00:08:45.724 "model_number": "SPDK bdev Controller", 00:08:45.724 "serial_number": "SPDK0", 00:08:45.724 "firmware_revision": "25.01", 00:08:45.724 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:45.724 "oacs": { 00:08:45.724 "security": 0, 00:08:45.724 "format": 0, 00:08:45.724 "firmware": 0, 00:08:45.724 "ns_manage": 0 00:08:45.724 }, 00:08:45.724 "multi_ctrlr": true, 00:08:45.724 "ana_reporting": false 00:08:45.724 }, 00:08:45.724 "vs": { 00:08:45.724 "nvme_version": "1.3" 00:08:45.724 }, 00:08:45.724 "ns_data": { 00:08:45.724 "id": 1, 00:08:45.724 "can_share": true 00:08:45.724 } 00:08:45.724 } 00:08:45.724 ], 00:08:45.724 "mp_policy": "active_passive" 00:08:45.724 } 00:08:45.724 } 00:08:45.724 ] 00:08:45.724 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=4168395 00:08:45.724 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:45.724 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:45.724 Running I/O for 10 seconds... 00:08:47.098 Latency(us) 00:08:47.098 [2024-12-10T21:39:54.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.098 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:08:47.098 [2024-12-10T21:39:54.830Z] =================================================================================================================== 00:08:47.098 [2024-12-10T21:39:54.830Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:08:47.098 00:08:47.664 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:47.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.922 Nvme0n1 : 2.00 15050.00 58.79 0.00 0.00 0.00 0.00 0.00 00:08:47.922 [2024-12-10T21:39:55.654Z] =================================================================================================================== 00:08:47.922 [2024-12-10T21:39:55.654Z] Total : 15050.00 58.79 0.00 0.00 0.00 0.00 0.00 00:08:47.922 00:08:47.922 true 00:08:47.922 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:47.922 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:48.180 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:48.180 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:48.180 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4168395 00:08:48.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.747 Nvme0n1 : 3.00 15198.00 59.37 0.00 0.00 0.00 0.00 0.00 00:08:48.747 [2024-12-10T21:39:56.479Z] =================================================================================================================== 00:08:48.747 [2024-12-10T21:39:56.479Z] Total : 15198.00 59.37 0.00 0.00 0.00 0.00 0.00 00:08:48.747 00:08:49.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.681 Nvme0n1 : 4.00 15304.50 59.78 0.00 0.00 0.00 0.00 0.00 00:08:49.681 [2024-12-10T21:39:57.413Z] =================================================================================================================== 00:08:49.681 [2024-12-10T21:39:57.413Z] Total : 15304.50 59.78 0.00 0.00 0.00 0.00 0.00 00:08:49.681 00:08:51.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.090 Nvme0n1 : 5.00 15368.40 60.03 0.00 0.00 0.00 0.00 0.00 00:08:51.090 [2024-12-10T21:39:58.822Z] =================================================================================================================== 00:08:51.090 [2024-12-10T21:39:58.822Z] Total : 15368.40 60.03 0.00 0.00 0.00 0.00 0.00 00:08:51.090 00:08:52.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.022 Nvme0n1 : 6.00 15432.17 60.28 0.00 0.00 0.00 0.00 0.00 00:08:52.022 [2024-12-10T21:39:59.754Z] =================================================================================================================== 00:08:52.022 
[2024-12-10T21:39:59.754Z] Total : 15432.17 60.28 0.00 0.00 0.00 0.00 0.00 00:08:52.022 00:08:52.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.957 Nvme0n1 : 7.00 15468.43 60.42 0.00 0.00 0.00 0.00 0.00 00:08:52.957 [2024-12-10T21:40:00.689Z] =================================================================================================================== 00:08:52.957 [2024-12-10T21:40:00.689Z] Total : 15468.43 60.42 0.00 0.00 0.00 0.00 0.00 00:08:52.957 00:08:53.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.892 Nvme0n1 : 8.00 15511.38 60.59 0.00 0.00 0.00 0.00 0.00 00:08:53.892 [2024-12-10T21:40:01.624Z] =================================================================================================================== 00:08:53.892 [2024-12-10T21:40:01.624Z] Total : 15511.38 60.59 0.00 0.00 0.00 0.00 0.00 00:08:53.892 00:08:54.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.828 Nvme0n1 : 9.00 15546.56 60.73 0.00 0.00 0.00 0.00 0.00 00:08:54.828 [2024-12-10T21:40:02.560Z] =================================================================================================================== 00:08:54.828 [2024-12-10T21:40:02.560Z] Total : 15546.56 60.73 0.00 0.00 0.00 0.00 0.00 00:08:54.828 00:08:55.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.763 Nvme0n1 : 10.00 15592.10 60.91 0.00 0.00 0.00 0.00 0.00 00:08:55.763 [2024-12-10T21:40:03.495Z] =================================================================================================================== 00:08:55.763 [2024-12-10T21:40:03.495Z] Total : 15592.10 60.91 0.00 0.00 0.00 0.00 0.00 00:08:55.763 00:08:55.763 00:08:55.763 Latency(us) 00:08:55.763 [2024-12-10T21:40:03.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:55.763 Nvme0n1 : 10.01 15593.55 60.91 0.00 0.00 8203.70 4320.52 16019.91 00:08:55.763 [2024-12-10T21:40:03.495Z] =================================================================================================================== 00:08:55.763 [2024-12-10T21:40:03.495Z] Total : 15593.55 60.91 0.00 0.00 8203.70 4320.52 16019.91 00:08:55.763 { 00:08:55.763 "results": [ 00:08:55.763 { 00:08:55.763 "job": "Nvme0n1", 00:08:55.763 "core_mask": "0x2", 00:08:55.763 "workload": "randwrite", 00:08:55.763 "status": "finished", 00:08:55.763 "queue_depth": 128, 00:08:55.763 "io_size": 4096, 00:08:55.763 "runtime": 10.007281, 00:08:55.763 "iops": 15593.546338910639, 00:08:55.763 "mibps": 60.91229038636968, 00:08:55.763 "io_failed": 0, 00:08:55.763 "io_timeout": 0, 00:08:55.763 "avg_latency_us": 8203.70092672221, 00:08:55.763 "min_latency_us": 4320.521481481482, 00:08:55.763 "max_latency_us": 16019.91111111111 00:08:55.763 } 00:08:55.763 ], 00:08:55.763 "core_count": 1 00:08:55.763 } 00:08:55.763 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4168257 00:08:55.763 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 4168257 ']' 00:08:55.763 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 4168257 00:08:55.763 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:55.763 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.763 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4168257 00:08:55.763 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:55.763 22:40:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:55.763 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4168257' 00:08:55.763 killing process with pid 4168257 00:08:55.763 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 4168257 00:08:55.763 Received shutdown signal, test time was about 10.000000 seconds 00:08:55.763 00:08:55.763 Latency(us) 00:08:55.763 [2024-12-10T21:40:03.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.763 [2024-12-10T21:40:03.495Z] =================================================================================================================== 00:08:55.763 [2024-12-10T21:40:03.495Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:55.763 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 4168257 00:08:56.021 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.279 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:56.537 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:56.537 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4165628 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4165628 00:08:57.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4165628 Killed "${NVMF_APP[@]}" "$@" 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4169738 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4169738 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4169738 ']' 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.102 22:40:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.102 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.103 [2024-12-10 22:40:04.616934] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:08:57.103 [2024-12-10 22:40:04.617035] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.103 [2024-12-10 22:40:04.690245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.103 [2024-12-10 22:40:04.748297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.103 [2024-12-10 22:40:04.748373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.103 [2024-12-10 22:40:04.748408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.103 [2024-12-10 22:40:04.748420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.103 [2024-12-10 22:40:04.748429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:57.103 [2024-12-10 22:40:04.749084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.360 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.360 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:57.360 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.360 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.360 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.360 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.360 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.618 [2024-12-10 22:40:05.141179] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:57.618 [2024-12-10 22:40:05.141312] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:57.618 [2024-12-10 22:40:05.141359] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:57.618 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:57.618 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 079b872c-8339-49ba-a633-23ef271529c5 00:08:57.618 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=079b872c-8339-49ba-a633-23ef271529c5 
00:08:57.618 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.618 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:57.618 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.619 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.619 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:57.876 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 079b872c-8339-49ba-a633-23ef271529c5 -t 2000 00:08:58.134 [ 00:08:58.134 { 00:08:58.134 "name": "079b872c-8339-49ba-a633-23ef271529c5", 00:08:58.134 "aliases": [ 00:08:58.134 "lvs/lvol" 00:08:58.134 ], 00:08:58.134 "product_name": "Logical Volume", 00:08:58.134 "block_size": 4096, 00:08:58.134 "num_blocks": 38912, 00:08:58.134 "uuid": "079b872c-8339-49ba-a633-23ef271529c5", 00:08:58.134 "assigned_rate_limits": { 00:08:58.134 "rw_ios_per_sec": 0, 00:08:58.134 "rw_mbytes_per_sec": 0, 00:08:58.134 "r_mbytes_per_sec": 0, 00:08:58.134 "w_mbytes_per_sec": 0 00:08:58.134 }, 00:08:58.134 "claimed": false, 00:08:58.134 "zoned": false, 00:08:58.134 "supported_io_types": { 00:08:58.134 "read": true, 00:08:58.134 "write": true, 00:08:58.134 "unmap": true, 00:08:58.134 "flush": false, 00:08:58.134 "reset": true, 00:08:58.134 "nvme_admin": false, 00:08:58.134 "nvme_io": false, 00:08:58.134 "nvme_io_md": false, 00:08:58.134 "write_zeroes": true, 00:08:58.135 "zcopy": false, 00:08:58.135 "get_zone_info": false, 00:08:58.135 "zone_management": false, 00:08:58.135 "zone_append": 
false, 00:08:58.135 "compare": false, 00:08:58.135 "compare_and_write": false, 00:08:58.135 "abort": false, 00:08:58.135 "seek_hole": true, 00:08:58.135 "seek_data": true, 00:08:58.135 "copy": false, 00:08:58.135 "nvme_iov_md": false 00:08:58.135 }, 00:08:58.135 "driver_specific": { 00:08:58.135 "lvol": { 00:08:58.135 "lvol_store_uuid": "3fd0941b-23bb-4982-b3d7-79bea27fa51e", 00:08:58.135 "base_bdev": "aio_bdev", 00:08:58.135 "thin_provision": false, 00:08:58.135 "num_allocated_clusters": 38, 00:08:58.135 "snapshot": false, 00:08:58.135 "clone": false, 00:08:58.135 "esnap_clone": false 00:08:58.135 } 00:08:58.135 } 00:08:58.135 } 00:08:58.135 ] 00:08:58.135 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:58.135 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:58.135 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:58.393 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:58.393 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:58.393 22:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:58.651 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:58.651 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:58.909 [2024-12-10 22:40:06.522579] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:58.909 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:58.909 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:58.909 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:58.909 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.909 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.909 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.909 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.909 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.909 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.909 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.909 22:40:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:58.909 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:59.168 request: 00:08:59.168 { 00:08:59.168 "uuid": "3fd0941b-23bb-4982-b3d7-79bea27fa51e", 00:08:59.168 "method": "bdev_lvol_get_lvstores", 00:08:59.168 "req_id": 1 00:08:59.168 } 00:08:59.168 Got JSON-RPC error response 00:08:59.168 response: 00:08:59.168 { 00:08:59.168 "code": -19, 00:08:59.168 "message": "No such device" 00:08:59.168 } 00:08:59.168 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:59.168 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:59.168 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:59.168 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:59.168 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.426 aio_bdev 00:08:59.426 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 079b872c-8339-49ba-a633-23ef271529c5 00:08:59.426 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=079b872c-8339-49ba-a633-23ef271529c5 00:08:59.426 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.426 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:59.426 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.426 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.426 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:59.684 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 079b872c-8339-49ba-a633-23ef271529c5 -t 2000 00:08:59.941 [ 00:08:59.941 { 00:08:59.941 "name": "079b872c-8339-49ba-a633-23ef271529c5", 00:08:59.941 "aliases": [ 00:08:59.941 "lvs/lvol" 00:08:59.941 ], 00:08:59.941 "product_name": "Logical Volume", 00:08:59.941 "block_size": 4096, 00:08:59.941 "num_blocks": 38912, 00:08:59.941 "uuid": "079b872c-8339-49ba-a633-23ef271529c5", 00:08:59.941 "assigned_rate_limits": { 00:08:59.941 "rw_ios_per_sec": 0, 00:08:59.941 "rw_mbytes_per_sec": 0, 00:08:59.941 "r_mbytes_per_sec": 0, 00:08:59.941 "w_mbytes_per_sec": 0 00:08:59.941 }, 00:08:59.941 "claimed": false, 00:08:59.941 "zoned": false, 00:08:59.941 "supported_io_types": { 00:08:59.941 "read": true, 00:08:59.941 "write": true, 00:08:59.941 "unmap": true, 00:08:59.941 "flush": false, 00:08:59.941 "reset": true, 00:08:59.941 "nvme_admin": false, 00:08:59.941 "nvme_io": false, 00:08:59.941 "nvme_io_md": false, 00:08:59.941 "write_zeroes": true, 00:08:59.941 "zcopy": false, 00:08:59.941 "get_zone_info": false, 00:08:59.941 "zone_management": false, 00:08:59.941 "zone_append": false, 00:08:59.941 "compare": false, 00:08:59.941 "compare_and_write": false, 
00:08:59.941 "abort": false, 00:08:59.941 "seek_hole": true, 00:08:59.941 "seek_data": true, 00:08:59.941 "copy": false, 00:08:59.941 "nvme_iov_md": false 00:08:59.941 }, 00:08:59.941 "driver_specific": { 00:08:59.941 "lvol": { 00:08:59.941 "lvol_store_uuid": "3fd0941b-23bb-4982-b3d7-79bea27fa51e", 00:08:59.941 "base_bdev": "aio_bdev", 00:08:59.941 "thin_provision": false, 00:08:59.941 "num_allocated_clusters": 38, 00:08:59.941 "snapshot": false, 00:08:59.941 "clone": false, 00:08:59.941 "esnap_clone": false 00:08:59.941 } 00:08:59.941 } 00:08:59.941 } 00:08:59.941 ] 00:08:59.942 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:59.942 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:08:59.942 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:00.200 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:00.200 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:09:00.200 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:00.766 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:00.766 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 079b872c-8339-49ba-a633-23ef271529c5 00:09:00.766 22:40:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fd0941b-23bb-4982-b3d7-79bea27fa51e 00:09:01.332 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:01.332 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:01.332 00:09:01.332 real 0m19.451s 00:09:01.332 user 0m49.259s 00:09:01.332 sys 0m4.501s 00:09:01.332 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.332 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.332 ************************************ 00:09:01.332 END TEST lvs_grow_dirty 00:09:01.332 ************************************ 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:01.590 nvmf_trace.0 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:01.590 rmmod nvme_tcp 00:09:01.590 rmmod nvme_fabrics 00:09:01.590 rmmod nvme_keyring 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4169738 ']' 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4169738 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 4169738 ']' 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 4169738 
00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4169738 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4169738' 00:09:01.590 killing process with pid 4169738 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 4169738 00:09:01.590 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 4169738 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.850 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.757 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:03.757 00:09:03.757 real 0m42.681s 00:09:03.757 user 1m12.574s 00:09:03.757 sys 0m8.339s 00:09:03.757 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.757 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:03.757 ************************************ 00:09:03.757 END TEST nvmf_lvs_grow 00:09:03.757 ************************************ 00:09:03.757 22:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:03.757 22:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:03.757 22:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.757 22:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:04.017 ************************************ 00:09:04.017 START TEST nvmf_bdev_io_wait 00:09:04.017 ************************************ 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:04.017 * Looking for test storage... 
00:09:04.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:04.017 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.017 --rc genhtml_branch_coverage=1 00:09:04.017 --rc genhtml_function_coverage=1 00:09:04.017 --rc genhtml_legend=1 00:09:04.017 --rc geninfo_all_blocks=1 00:09:04.017 --rc geninfo_unexecuted_blocks=1 00:09:04.017 00:09:04.017 ' 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:04.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.017 --rc genhtml_branch_coverage=1 00:09:04.017 --rc genhtml_function_coverage=1 00:09:04.017 --rc genhtml_legend=1 00:09:04.017 --rc geninfo_all_blocks=1 00:09:04.017 --rc geninfo_unexecuted_blocks=1 00:09:04.017 00:09:04.017 ' 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:04.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.017 --rc genhtml_branch_coverage=1 00:09:04.017 --rc genhtml_function_coverage=1 00:09:04.017 --rc genhtml_legend=1 00:09:04.017 --rc geninfo_all_blocks=1 00:09:04.017 --rc geninfo_unexecuted_blocks=1 00:09:04.017 00:09:04.017 ' 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:04.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.017 --rc genhtml_branch_coverage=1 00:09:04.017 --rc genhtml_function_coverage=1 00:09:04.017 --rc genhtml_legend=1 00:09:04.017 --rc geninfo_all_blocks=1 00:09:04.017 --rc geninfo_unexecuted_blocks=1 00:09:04.017 00:09:04.017 ' 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.017 22:40:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.017 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:04.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
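The `line 33: [: : integer expression expected` message recorded above is the classic failure mode of `test`'s `-eq`: it requires integer operands, so an unset/empty variable makes the comparison error out (non-fatally here, since the branch simply evaluates false). A small reproduction, with the usual guard:

```shell
# With flag empty, [ "" -eq 1 ] prints "integer expression expected" to
# stderr and returns false -- exactly what common.sh line 33 hits above.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Defaulting the expansion avoids the error entirely:
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
fi
```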
00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:04.018 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:06.552 22:40:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:06.552 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:06.552 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.552 22:40:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:06.552 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.552 
22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:06.552 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.552 22:40:13 
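The discovery loop above ("Found net devices under 0000:0a:00.0: cvl_0_0") expands `/sys/bus/pci/devices/$pci/net/*` for each matched PCI function and keeps the basename as the interface name. A self-contained sketch of that step (the helper name `list_pci_net_devs` and the sysfs-root parameter are hypothetical, added so it can be exercised against any directory tree):

```shell
# For every PCI function under the given sysfs root, print the kernel net
# devices attached to it, in the same "Found net devices under ..." shape
# the trace emits. Unmatched globs are skipped via the -e test.
list_pci_net_devs() {
    local sysfs=${1:-/sys/bus/pci/devices} pci net
    for pci in "$sysfs"/*; do
        for net in "$pci"/net/*; do
            [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done
}
```

On the machine in this run, calling `list_pci_net_devs` would report `cvl_0_0` and `cvl_0_1` under the two ice (0x159b) functions.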
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:06.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:09:06.552 00:09:06.552 --- 10.0.0.2 ping statistics --- 00:09:06.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.552 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:09:06.552 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:06.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:09:06.552 00:09:06.552 --- 10.0.0.1 ping statistics --- 00:09:06.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.553 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.553 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4172276 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
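The `nvmf_tcp_init` steps above boil down to: move the target-side port into its own network namespace, address both ends of the link, bring them up, punch the NVMe/TCP port through iptables, and ping both directions. A condensed sketch of that plumbing (DRY_RUN and the `run` wrapper are additions so the commands echo instead of executing, since the real thing needs root and this specific hardware):

```shell
# Dry-run rendition of the namespace setup the trace performs. Interface
# names and addresses are the ones from this run (cvl_0_0/cvl_0_1, 10.0.0.x).
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                       # target port lives in the ns
run ip addr add 10.0.0.1/24 dev "$INI_IF"                   # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                      # sanity check, as in the log
```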
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4172276 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 4172276 ']' 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.553 [2024-12-10 22:40:14.053251] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:09:06.553 [2024-12-10 22:40:14.053338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.553 [2024-12-10 22:40:14.128858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.553 [2024-12-10 22:40:14.188727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.553 [2024-12-10 22:40:14.188768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:06.553 [2024-12-10 22:40:14.188798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.553 [2024-12-10 22:40:14.188809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.553 [2024-12-10 22:40:14.188818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.553 [2024-12-10 22:40:14.190212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.553 [2024-12-10 22:40:14.190324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.553 [2024-12-10 22:40:14.190413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.553 [2024-12-10 22:40:14.190416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.553 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.811 22:40:14 
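`waitforlisten 4172276` above blocks until the freshly started `nvmf_tgt` is ready on `/var/tmp/spdk.sock`. In essence it is a poll loop over two conditions: the pid is still alive and the RPC socket exists. A hedged sketch (the function name `wait_for_rpc_sock` and the retry/interval values are illustrative, not the harness's exact implementation):

```shell
# Poll until $pid is alive AND its UNIX-domain RPC socket appears, or give
# up after $retries * 0.1s. Returns 1 if the process dies or we time out.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$sock" ] && return 0               # RPC socket is up
        sleep 0.1
    done
    return 1
}
```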
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.811 [2024-12-10 22:40:14.375683] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.811 Malloc0 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.811 
22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.811 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.812 [2024-12-10 22:40:14.428670] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4172305 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4172306 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
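The `rpc_cmd` calls traced above set bdev options, finish framework init, create the TCP transport, and wire a 64 MiB/512 B Malloc bdev into subsystem `cnode1` listening on 10.0.0.2:4420. Written out as a plain sequence (the `rpc_cmd` below is a stub that only echoes; the real helper forwards to SPDK's `scripts/rpc.py` against `/var/tmp/spdk.sock`):

```shell
# Stub standing in for the harness helper, so the sequence is runnable here.
rpc_cmd() { echo "+ rpc.py $*"; }

rpc_cmd bdev_set_options -p 5 -c 1
rpc_cmd framework_start_init
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The RPC names and arguments are exactly those in the trace; only the delivery mechanism is stubbed.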
00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4172309 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:06.812 { 00:09:06.812 "params": { 00:09:06.812 "name": "Nvme$subsystem", 00:09:06.812 "trtype": "$TEST_TRANSPORT", 00:09:06.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.812 "adrfam": "ipv4", 00:09:06.812 "trsvcid": "$NVMF_PORT", 00:09:06.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.812 "hdgst": ${hdgst:-false}, 00:09:06.812 "ddgst": ${ddgst:-false} 00:09:06.812 }, 00:09:06.812 "method": "bdev_nvme_attach_controller" 00:09:06.812 } 00:09:06.812 EOF 00:09:06.812 )") 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4172311 00:09:06.812 22:40:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:06.812 { 00:09:06.812 "params": { 00:09:06.812 "name": "Nvme$subsystem", 00:09:06.812 "trtype": "$TEST_TRANSPORT", 00:09:06.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.812 "adrfam": "ipv4", 00:09:06.812 "trsvcid": "$NVMF_PORT", 00:09:06.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.812 "hdgst": ${hdgst:-false}, 00:09:06.812 "ddgst": ${ddgst:-false} 00:09:06.812 }, 00:09:06.812 "method": "bdev_nvme_attach_controller" 00:09:06.812 } 00:09:06.812 EOF 00:09:06.812 )") 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:06.812 { 00:09:06.812 "params": { 00:09:06.812 "name": "Nvme$subsystem", 00:09:06.812 "trtype": "$TEST_TRANSPORT", 00:09:06.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.812 "adrfam": "ipv4", 00:09:06.812 "trsvcid": "$NVMF_PORT", 00:09:06.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.812 "hdgst": ${hdgst:-false}, 00:09:06.812 "ddgst": ${ddgst:-false} 00:09:06.812 }, 00:09:06.812 "method": "bdev_nvme_attach_controller" 00:09:06.812 } 00:09:06.812 EOF 00:09:06.812 )") 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:06.812 { 00:09:06.812 "params": { 00:09:06.812 "name": "Nvme$subsystem", 00:09:06.812 "trtype": "$TEST_TRANSPORT", 00:09:06.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.812 "adrfam": "ipv4", 00:09:06.812 "trsvcid": "$NVMF_PORT", 00:09:06.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.812 "hdgst": ${hdgst:-false}, 00:09:06.812 "ddgst": ${ddgst:-false} 00:09:06.812 }, 00:09:06.812 "method": "bdev_nvme_attach_controller" 00:09:06.812 } 00:09:06.812 EOF 00:09:06.812 )") 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4172305 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:06.812 "params": { 00:09:06.812 "name": "Nvme1", 00:09:06.812 "trtype": "tcp", 00:09:06.812 "traddr": "10.0.0.2", 00:09:06.812 "adrfam": "ipv4", 00:09:06.812 "trsvcid": "4420", 00:09:06.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.812 "hdgst": false, 00:09:06.812 "ddgst": false 00:09:06.812 }, 00:09:06.812 "method": "bdev_nvme_attach_controller" 00:09:06.812 }' 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:06.812 "params": { 00:09:06.812 "name": "Nvme1", 00:09:06.812 "trtype": "tcp", 00:09:06.812 "traddr": "10.0.0.2", 00:09:06.812 "adrfam": "ipv4", 00:09:06.812 "trsvcid": "4420", 00:09:06.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.812 "hdgst": false, 00:09:06.812 "ddgst": false 00:09:06.812 }, 00:09:06.812 "method": "bdev_nvme_attach_controller" 00:09:06.812 }' 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf 
'%s\n' '{ 00:09:06.812 "params": { 00:09:06.812 "name": "Nvme1", 00:09:06.812 "trtype": "tcp", 00:09:06.812 "traddr": "10.0.0.2", 00:09:06.812 "adrfam": "ipv4", 00:09:06.812 "trsvcid": "4420", 00:09:06.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.812 "hdgst": false, 00:09:06.812 "ddgst": false 00:09:06.812 }, 00:09:06.812 "method": "bdev_nvme_attach_controller" 00:09:06.812 }' 00:09:06.812 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:06.812 "params": { 00:09:06.812 "name": "Nvme1", 00:09:06.812 "trtype": "tcp", 00:09:06.812 "traddr": "10.0.0.2", 00:09:06.812 "adrfam": "ipv4", 00:09:06.812 "trsvcid": "4420", 00:09:06.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.812 "hdgst": false, 00:09:06.812 "ddgst": false 00:09:06.812 }, 00:09:06.812 "method": "bdev_nvme_attach_controller" 00:09:06.812 }' 00:09:06.812 [2024-12-10 22:40:14.480625] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:09:06.812 [2024-12-10 22:40:14.480699] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:06.812 [2024-12-10 22:40:14.480717] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:09:06.812 [2024-12-10 22:40:14.480717] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:09:06.812 [2024-12-10 22:40:14.480719] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
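Each bdevperf instance receives its controller configuration through bash process substitution, which is why the command lines above reference `--json /dev/fd/63`. A minimal sketch of that mechanism, reproducing only the controller-attach fragment rendered in the log (the real `gen_nvmf_target_json` helper in nvmf/common.sh assembles the full config from heredoc fragments and `jq`):

```shell
# Sketch (not the actual nvmf/common.sh helper): emit the controller-attach
# JSON fragment that the log shows being rendered for Nvme1.
gen_nvmf_target_json() {
    cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Process substitution exposes the rendered JSON to bdevperf as /dev/fd/63:
#   bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256
gen_nvmf_target_json
```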
00:09:06.812 [2024-12-10 22:40:14.480802] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:06.812 [2024-12-10 22:40:14.480803] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:06.812 [2024-12-10 22:40:14.480803] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:07.070 [2024-12-10 22:40:14.666041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.070 [2024-12-10 22:40:14.722191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:07.070 [2024-12-10 22:40:14.772009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.328 [2024-12-10 22:40:14.826214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:07.328 [2024-12-10 22:40:14.875155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.328 [2024-12-10 22:40:14.931128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:07.328 [2024-12-10 22:40:14.948203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.328 [2024-12-10 22:40:14.997564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:09:07.585 Running I/O for 1 seconds... 00:09:07.585 Running I/O for 1 seconds... 00:09:07.585 Running I/O for 1 seconds... 00:09:07.585 Running I/O for 1 seconds... 
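The WRITE/READ/FLUSH/UNMAP PIDs captured above belong to bdevperf instances launched in the background and later reaped with `wait` (bdev_io_wait.sh lines 37-40). A stubbed sketch of that fan-out/fan-in pattern; `bdevperf` here is a placeholder function that sleeps instead of driving I/O, not the real binary:

```shell
# Placeholder for build/examples/bdevperf: pretend to run a workload.
bdevperf() { sleep 0.1; echo "done: $*"; }

run_workloads() {
    # Four concurrent instances, one workload and one core mask each.
    bdevperf -m 0x10 -i 1 -w write -t 1 & WRITE_PID=$!
    bdevperf -m 0x20 -i 2 -w read  -t 1 & READ_PID=$!
    bdevperf -m 0x40 -i 3 -w flush -t 1 & FLUSH_PID=$!
    bdevperf -m 0x80 -i 4 -w unmap -t 1 & UNMAP_PID=$!
    # Block until every instance has exited before tearing down the target.
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
}
run_workloads
```

Waiting on each PID individually, as the test script does, also propagates a nonzero exit status from any single bdevperf run.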
00:09:08.518 11092.00 IOPS, 43.33 MiB/s [2024-12-10T21:40:16.250Z] 5424.00 IOPS, 21.19 MiB/s 00:09:08.518 Latency(us) 00:09:08.518 [2024-12-10T21:40:16.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.518 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:08.518 Nvme1n1 : 1.01 11152.21 43.56 0.00 0.00 11433.58 5898.24 22816.24 00:09:08.518 [2024-12-10T21:40:16.250Z] =================================================================================================================== 00:09:08.518 [2024-12-10T21:40:16.250Z] Total : 11152.21 43.56 0.00 0.00 11433.58 5898.24 22816.24 00:09:08.518 00:09:08.518 Latency(us) 00:09:08.518 [2024-12-10T21:40:16.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.518 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:08.518 Nvme1n1 : 1.02 5435.87 21.23 0.00 0.00 23368.15 9854.67 37282.70 00:09:08.518 [2024-12-10T21:40:16.250Z] =================================================================================================================== 00:09:08.518 [2024-12-10T21:40:16.250Z] Total : 5435.87 21.23 0.00 0.00 23368.15 9854.67 37282.70 00:09:08.776 177264.00 IOPS, 692.44 MiB/s 00:09:08.776 Latency(us) 00:09:08.776 [2024-12-10T21:40:16.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.776 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:08.776 Nvme1n1 : 1.00 176933.17 691.15 0.00 0.00 719.49 288.24 1856.85 00:09:08.776 [2024-12-10T21:40:16.508Z] =================================================================================================================== 00:09:08.776 [2024-12-10T21:40:16.508Z] Total : 176933.17 691.15 0.00 0.00 719.49 288.24 1856.85 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4172306 00:09:08.776 5728.00 IOPS, 22.38 MiB/s 00:09:08.776 Latency(us) 
00:09:08.776 [2024-12-10T21:40:16.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.776 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:08.776 Nvme1n1 : 1.01 5834.76 22.79 0.00 0.00 21856.25 5121.52 50875.35 00:09:08.776 [2024-12-10T21:40:16.508Z] =================================================================================================================== 00:09:08.776 [2024-12-10T21:40:16.508Z] Total : 5834.76 22.79 0.00 0.00 21856.25 5121.52 50875.35 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4172309 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4172311 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.776 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.776 rmmod nvme_tcp 00:09:09.034 rmmod nvme_fabrics 00:09:09.034 rmmod nvme_keyring 00:09:09.034 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.034 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:09.034 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:09.034 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4172276 ']' 00:09:09.034 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4172276 00:09:09.034 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 4172276 ']' 00:09:09.034 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 4172276 00:09:09.034 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:09.035 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.035 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4172276 00:09:09.035 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.035 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.035 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4172276' 00:09:09.035 killing process with pid 4172276 00:09:09.035 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 4172276 00:09:09.035 22:40:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 4172276 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.294 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.202 22:40:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:11.202 00:09:11.202 real 0m7.332s 00:09:11.202 user 0m16.176s 00:09:11.202 sys 0m3.576s 00:09:11.202 22:40:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.202 22:40:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.202 ************************************ 
00:09:11.202 END TEST nvmf_bdev_io_wait 00:09:11.202 ************************************ 00:09:11.202 22:40:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:11.202 22:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:11.202 22:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.202 22:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.202 ************************************ 00:09:11.202 START TEST nvmf_queue_depth 00:09:11.202 ************************************ 00:09:11.202 22:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:11.460 * Looking for test storage... 00:09:11.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.460 22:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:11.460 22:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:11.461 22:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:11.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.461 --rc genhtml_branch_coverage=1 00:09:11.461 --rc genhtml_function_coverage=1 00:09:11.461 --rc genhtml_legend=1 00:09:11.461 --rc geninfo_all_blocks=1 00:09:11.461 --rc 
geninfo_unexecuted_blocks=1 00:09:11.461 00:09:11.461 ' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:11.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.461 --rc genhtml_branch_coverage=1 00:09:11.461 --rc genhtml_function_coverage=1 00:09:11.461 --rc genhtml_legend=1 00:09:11.461 --rc geninfo_all_blocks=1 00:09:11.461 --rc geninfo_unexecuted_blocks=1 00:09:11.461 00:09:11.461 ' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:11.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.461 --rc genhtml_branch_coverage=1 00:09:11.461 --rc genhtml_function_coverage=1 00:09:11.461 --rc genhtml_legend=1 00:09:11.461 --rc geninfo_all_blocks=1 00:09:11.461 --rc geninfo_unexecuted_blocks=1 00:09:11.461 00:09:11.461 ' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:11.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.461 --rc genhtml_branch_coverage=1 00:09:11.461 --rc genhtml_function_coverage=1 00:09:11.461 --rc genhtml_legend=1 00:09:11.461 --rc geninfo_all_blocks=1 00:09:11.461 --rc geninfo_unexecuted_blocks=1 00:09:11.461 00:09:11.461 ' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.461 22:40:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.461 22:40:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.461 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.462 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.462 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.462 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.462 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.462 22:40:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.462 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:11.462 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:11.462 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:11.462 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.997 22:40:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.997 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:13.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:13.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:13.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:13.998 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.998 
22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:09:13.998 00:09:13.998 --- 10.0.0.2 ping statistics --- 00:09:13.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.998 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:13.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:09:13.998 00:09:13.998 --- 10.0.0.1 ping statistics --- 00:09:13.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.998 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4174543 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4174543 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4174543 ']' 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.998 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.998 [2024-12-10 22:40:21.480560] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:09:13.999 [2024-12-10 22:40:21.480647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.999 [2024-12-10 22:40:21.556007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.999 [2024-12-10 22:40:21.610497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.999 [2024-12-10 22:40:21.610569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:13.999 [2024-12-10 22:40:21.610598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.999 [2024-12-10 22:40:21.610609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.999 [2024-12-10 22:40:21.610617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.999 [2024-12-10 22:40:21.611189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.999 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.999 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:13.999 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.999 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.999 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.257 [2024-12-10 22:40:21.750617] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.257 Malloc0 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.257 [2024-12-10 22:40:21.799506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.257 22:40:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4174683 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4174683 /var/tmp/bdevperf.sock 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4174683 ']' 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:14.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.257 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.257 [2024-12-10 22:40:21.845242] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:09:14.257 [2024-12-10 22:40:21.845317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174683 ] 00:09:14.257 [2024-12-10 22:40:21.911560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.257 [2024-12-10 22:40:21.968129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.516 22:40:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.516 22:40:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:14.516 22:40:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:14.516 22:40:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.516 22:40:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.774 NVMe0n1 00:09:14.774 22:40:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.774 22:40:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:14.774 Running I/O for 10 seconds... 
00:09:17.081 8192.00 IOPS, 32.00 MiB/s [2024-12-10T21:40:25.748Z] 8192.50 IOPS, 32.00 MiB/s [2024-12-10T21:40:26.683Z] 8192.33 IOPS, 32.00 MiB/s [2024-12-10T21:40:27.651Z] 8194.00 IOPS, 32.01 MiB/s [2024-12-10T21:40:28.592Z] 8222.00 IOPS, 32.12 MiB/s [2024-12-10T21:40:29.525Z] 8212.17 IOPS, 32.08 MiB/s [2024-12-10T21:40:30.898Z] 8228.00 IOPS, 32.14 MiB/s [2024-12-10T21:40:31.833Z] 8250.38 IOPS, 32.23 MiB/s [2024-12-10T21:40:32.766Z] 8283.22 IOPS, 32.36 MiB/s [2024-12-10T21:40:32.766Z] 8287.20 IOPS, 32.37 MiB/s 00:09:25.034 Latency(us) 00:09:25.034 [2024-12-10T21:40:32.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.034 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:25.034 Verification LBA range: start 0x0 length 0x4000 00:09:25.034 NVMe0n1 : 10.10 8302.87 32.43 0.00 0.00 122845.82 22330.79 73400.32 00:09:25.034 [2024-12-10T21:40:32.766Z] =================================================================================================================== 00:09:25.034 [2024-12-10T21:40:32.766Z] Total : 8302.87 32.43 0.00 0.00 122845.82 22330.79 73400.32 00:09:25.034 { 00:09:25.034 "results": [ 00:09:25.034 { 00:09:25.034 "job": "NVMe0n1", 00:09:25.034 "core_mask": "0x1", 00:09:25.034 "workload": "verify", 00:09:25.034 "status": "finished", 00:09:25.034 "verify_range": { 00:09:25.034 "start": 0, 00:09:25.034 "length": 16384 00:09:25.034 }, 00:09:25.034 "queue_depth": 1024, 00:09:25.034 "io_size": 4096, 00:09:25.034 "runtime": 10.10446, 00:09:25.034 "iops": 8302.868238381863, 00:09:25.034 "mibps": 32.43307905617915, 00:09:25.034 "io_failed": 0, 00:09:25.034 "io_timeout": 0, 00:09:25.034 "avg_latency_us": 122845.8217501033, 00:09:25.034 "min_latency_us": 22330.785185185185, 00:09:25.034 "max_latency_us": 73400.32 00:09:25.034 } 00:09:25.034 ], 00:09:25.034 "core_count": 1 00:09:25.034 } 00:09:25.034 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
4174683 00:09:25.034 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4174683 ']' 00:09:25.034 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4174683 00:09:25.034 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:25.034 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.034 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4174683 00:09:25.034 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.034 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.034 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4174683' 00:09:25.034 killing process with pid 4174683 00:09:25.034 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4174683 00:09:25.034 Received shutdown signal, test time was about 10.000000 seconds 00:09:25.034 00:09:25.034 Latency(us) 00:09:25.034 [2024-12-10T21:40:32.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.034 [2024-12-10T21:40:32.766Z] =================================================================================================================== 00:09:25.034 [2024-12-10T21:40:32.766Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:25.034 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4174683 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.292 rmmod nvme_tcp 00:09:25.292 rmmod nvme_fabrics 00:09:25.292 rmmod nvme_keyring 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4174543 ']' 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4174543 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4174543 ']' 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4174543 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.292 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4174543 00:09:25.292 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:25.292 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:25.292 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4174543' 00:09:25.292 killing process with pid 4174543 00:09:25.292 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4174543 00:09:25.292 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4174543 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.550 22:40:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.091 22:40:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:28.091 00:09:28.091 real 0m16.417s 00:09:28.091 user 0m22.131s 00:09:28.091 sys 0m3.615s 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.092 ************************************ 00:09:28.092 END TEST nvmf_queue_depth 00:09:28.092 ************************************ 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.092 ************************************ 00:09:28.092 START TEST nvmf_target_multipath 00:09:28.092 ************************************ 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:28.092 * Looking for test storage... 
00:09:28.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:28.092 22:40:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.092 --rc genhtml_branch_coverage=1 00:09:28.092 --rc genhtml_function_coverage=1 00:09:28.092 --rc genhtml_legend=1 00:09:28.092 --rc geninfo_all_blocks=1 00:09:28.092 --rc geninfo_unexecuted_blocks=1 00:09:28.092 00:09:28.092 ' 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.092 --rc genhtml_branch_coverage=1 00:09:28.092 --rc genhtml_function_coverage=1 00:09:28.092 --rc genhtml_legend=1 00:09:28.092 --rc geninfo_all_blocks=1 00:09:28.092 --rc geninfo_unexecuted_blocks=1 00:09:28.092 00:09:28.092 ' 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.092 --rc genhtml_branch_coverage=1 00:09:28.092 --rc genhtml_function_coverage=1 00:09:28.092 --rc genhtml_legend=1 00:09:28.092 --rc geninfo_all_blocks=1 00:09:28.092 --rc geninfo_unexecuted_blocks=1 00:09:28.092 00:09:28.092 ' 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.092 --rc genhtml_branch_coverage=1 00:09:28.092 --rc genhtml_function_coverage=1 00:09:28.092 --rc genhtml_legend=1 00:09:28.092 --rc geninfo_all_blocks=1 00:09:28.092 --rc geninfo_unexecuted_blocks=1 00:09:28.092 00:09:28.092 ' 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:28.092 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:28.093 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.998 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.998 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.998 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.998 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.998 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.998 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:29.999 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:29.999 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:29.999 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.999 22:40:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:29.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.999 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:30.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:09:30.260 00:09:30.260 --- 10.0.0.2 ping statistics --- 00:09:30.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.260 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:09:30.260 00:09:30.260 --- 10.0.0.1 ping statistics --- 00:09:30.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.260 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:30.260 only one NIC for nvmf test 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:30.260 22:40:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.260 rmmod nvme_tcp 00:09:30.260 rmmod nvme_fabrics 00:09:30.260 rmmod nvme_keyring 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:30.260 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.261 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.798 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:32.798 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:32.798 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:32.798 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:32.798 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:32.799 00:09:32.799 real 0m4.598s 00:09:32.799 user 0m0.941s 00:09:32.799 sys 0m1.618s 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:32.799 ************************************ 00:09:32.799 END TEST nvmf_target_multipath 00:09:32.799 ************************************ 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.799 22:40:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:32.799 ************************************ 00:09:32.799 START TEST nvmf_zcopy 00:09:32.799 ************************************ 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:32.799 * Looking for test storage... 00:09:32.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.799 22:40:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:32.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.799 --rc genhtml_branch_coverage=1 00:09:32.799 --rc genhtml_function_coverage=1 00:09:32.799 --rc genhtml_legend=1 00:09:32.799 --rc geninfo_all_blocks=1 00:09:32.799 --rc geninfo_unexecuted_blocks=1 00:09:32.799 00:09:32.799 ' 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:32.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.799 --rc genhtml_branch_coverage=1 00:09:32.799 --rc genhtml_function_coverage=1 00:09:32.799 --rc genhtml_legend=1 00:09:32.799 --rc geninfo_all_blocks=1 00:09:32.799 --rc geninfo_unexecuted_blocks=1 00:09:32.799 00:09:32.799 ' 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:32.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.799 --rc genhtml_branch_coverage=1 00:09:32.799 --rc genhtml_function_coverage=1 00:09:32.799 --rc genhtml_legend=1 00:09:32.799 --rc geninfo_all_blocks=1 00:09:32.799 --rc geninfo_unexecuted_blocks=1 00:09:32.799 00:09:32.799 ' 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:32.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.799 --rc genhtml_branch_coverage=1 00:09:32.799 --rc 
genhtml_function_coverage=1 00:09:32.799 --rc genhtml_legend=1 00:09:32.799 --rc geninfo_all_blocks=1 00:09:32.799 --rc geninfo_unexecuted_blocks=1 00:09:32.799 00:09:32.799 ' 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.799 22:40:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.799 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:32.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:32.800 22:40:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:32.800 22:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.707 22:40:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:34.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:34.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:34.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:34.707 22:40:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.707 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:34.708 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.708 22:40:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.708 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:09:34.967 00:09:34.967 --- 10.0.0.2 ping statistics --- 00:09:34.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.967 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:09:34.967 00:09:34.967 --- 10.0.0.1 ping statistics --- 00:09:34.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.967 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=4179913 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4179913 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- 
# '[' -z 4179913 ']' 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.967 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.967 [2024-12-10 22:40:42.533741] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:09:34.967 [2024-12-10 22:40:42.533828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.967 [2024-12-10 22:40:42.604283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.967 [2024-12-10 22:40:42.656895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.967 [2024-12-10 22:40:42.656955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:34.967 [2024-12-10 22:40:42.656984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.967 [2024-12-10 22:40:42.656995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.967 [2024-12-10 22:40:42.657004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.967 [2024-12-10 22:40:42.657646] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 [2024-12-10 22:40:42.800878] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 [2024-12-10 22:40:42.817086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 malloc0 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:35.227 { 00:09:35.227 "params": { 00:09:35.227 "name": "Nvme$subsystem", 00:09:35.227 "trtype": "$TEST_TRANSPORT", 00:09:35.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.227 "adrfam": "ipv4", 00:09:35.227 "trsvcid": "$NVMF_PORT", 00:09:35.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.227 "hdgst": ${hdgst:-false}, 00:09:35.227 "ddgst": ${ddgst:-false} 00:09:35.227 }, 00:09:35.227 "method": "bdev_nvme_attach_controller" 00:09:35.227 } 00:09:35.227 EOF 00:09:35.227 )") 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:35.227 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:35.227 "params": { 00:09:35.227 "name": "Nvme1", 00:09:35.227 "trtype": "tcp", 00:09:35.227 "traddr": "10.0.0.2", 00:09:35.227 "adrfam": "ipv4", 00:09:35.227 "trsvcid": "4420", 00:09:35.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.227 "hdgst": false, 00:09:35.227 "ddgst": false 00:09:35.227 }, 00:09:35.227 "method": "bdev_nvme_attach_controller" 00:09:35.227 }' 00:09:35.227 [2024-12-10 22:40:42.905598] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:09:35.227 [2024-12-10 22:40:42.905685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4179940 ] 00:09:35.486 [2024-12-10 22:40:42.979028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.486 [2024-12-10 22:40:43.037568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.743 Running I/O for 10 seconds... 
00:09:37.606 5798.00 IOPS, 45.30 MiB/s [2024-12-10T21:40:46.712Z] 5841.00 IOPS, 45.63 MiB/s [2024-12-10T21:40:47.646Z] 5854.33 IOPS, 45.74 MiB/s [2024-12-10T21:40:48.579Z] 5863.00 IOPS, 45.80 MiB/s [2024-12-10T21:40:49.514Z] 5863.40 IOPS, 45.81 MiB/s [2024-12-10T21:40:50.447Z] 5866.67 IOPS, 45.83 MiB/s [2024-12-10T21:40:51.381Z] 5871.00 IOPS, 45.87 MiB/s [2024-12-10T21:40:52.315Z] 5873.12 IOPS, 45.88 MiB/s [2024-12-10T21:40:53.688Z] 5871.44 IOPS, 45.87 MiB/s [2024-12-10T21:40:53.688Z] 5877.00 IOPS, 45.91 MiB/s 00:09:45.956 Latency(us) 00:09:45.956 [2024-12-10T21:40:53.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.956 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:45.956 Verification LBA range: start 0x0 length 0x1000 00:09:45.956 Nvme1n1 : 10.01 5876.63 45.91 0.00 0.00 21719.64 3325.35 31651.46 00:09:45.956 [2024-12-10T21:40:53.688Z] =================================================================================================================== 00:09:45.956 [2024-12-10T21:40:53.688Z] Total : 5876.63 45.91 0.00 0.00 21719.64 3325.35 31651.46 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4181253 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:45.956 22:40:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:45.956 { 00:09:45.956 "params": { 00:09:45.956 "name": "Nvme$subsystem", 00:09:45.956 "trtype": "$TEST_TRANSPORT", 00:09:45.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.956 "adrfam": "ipv4", 00:09:45.956 "trsvcid": "$NVMF_PORT", 00:09:45.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.956 "hdgst": ${hdgst:-false}, 00:09:45.956 "ddgst": ${ddgst:-false} 00:09:45.956 }, 00:09:45.956 "method": "bdev_nvme_attach_controller" 00:09:45.956 } 00:09:45.956 EOF 00:09:45.956 )") 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:45.956 [2024-12-10 22:40:53.532954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.956 [2024-12-10 22:40:53.532994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:45.956 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:45.956 "params": { 00:09:45.956 "name": "Nvme1", 00:09:45.956 "trtype": "tcp", 00:09:45.956 "traddr": "10.0.0.2", 00:09:45.956 "adrfam": "ipv4", 00:09:45.956 "trsvcid": "4420", 00:09:45.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.956 "hdgst": false, 00:09:45.957 "ddgst": false 00:09:45.957 }, 00:09:45.957 "method": "bdev_nvme_attach_controller" 00:09:45.957 }' 00:09:45.957 [2024-12-10 22:40:53.540904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.540926] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.548909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.548929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.556925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.556945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.564945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.564965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.572975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.572994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.575509] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:09:45.957 [2024-12-10 22:40:53.575608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4181253 ] 00:09:45.957 [2024-12-10 22:40:53.580997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.581017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.589019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.589038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.597041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.597060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.605066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.605093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.613090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.613110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.621109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.621129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.629131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.629150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:45.957 [2024-12-10 22:40:53.637154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.637174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.645137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.957 [2024-12-10 22:40:53.645192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.645211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.653226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.653259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.661255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.661293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.669240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.669259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.677262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.677281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.957 [2024-12-10 22:40:53.685282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.957 [2024-12-10 22:40:53.685300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.693333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.693353] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.701324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.701343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.706757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.216 [2024-12-10 22:40:53.709346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.709366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.717372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.717392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.725419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.725453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.733447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.733483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.741471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.741508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.749496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.749565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.757512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:09:46.216 [2024-12-10 22:40:53.757572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.765558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.765608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.773560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.773602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.781576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.781617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.789642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.789678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.797656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.797694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.805664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.805697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.813661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.813682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.821682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 
22:40:53.821703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.829715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.829739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.837734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.837758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.845755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.845777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.853782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.853807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.861802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.861826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.869846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.869869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.877847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.877870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.885885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.885924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.893915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.893936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 Running I/O for 5 seconds... 00:09:46.216 [2024-12-10 22:40:53.901946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.901974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.915633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.915663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.928367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.928397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-10 22:40:53.938895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-10 22:40:53.938924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.474 [2024-12-10 22:40:53.949647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.474 [2024-12-10 22:40:53.949684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.474 [2024-12-10 22:40:53.962719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.474 [2024-12-10 22:40:53.962747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.474 [2024-12-10 22:40:53.972820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.474 [2024-12-10 22:40:53.972847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:46.474 [2024-12-10 22:40:53.984016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.474 [2024-12-10 22:40:53.984043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... identical NSID-conflict error pair repeats at ~10 ms intervals from 2024-12-10 22:40:53.994 through 22:40:54.894 ...] 
11935.00 IOPS, 93.24 MiB/s [2024-12-10T21:40:54.982Z] [2024-12-10 22:40:54.905168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.250 [2024-12-10 22:40:54.905196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... identical NSID-conflict error pair repeats at ~10 ms intervals from 2024-12-10 22:40:54.917 through 22:40:55.835 ...] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.845655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.845688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.856483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.856510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.866868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.866895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.877112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.877140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.887518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.887555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.898242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.898270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 12021.50 IOPS, 93.92 MiB/s [2024-12-10T21:40:56.041Z] [2024-12-10 22:40:55.908714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.908750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.919666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.919698] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.930161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.930188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.940769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.940796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.953180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.953206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.963063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.963089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.974286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.974313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.984901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.984928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:55.995572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:55.995599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.309 [2024-12-10 22:40:56.007949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:56.007976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:48.309 [2024-12-10 22:40:56.017645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.309 [2024-12-10 22:40:56.017673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.572 [2024-12-10 22:40:56.028723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.572 [2024-12-10 22:40:56.028750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.572 [2024-12-10 22:40:56.039812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.572 [2024-12-10 22:40:56.039839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.572 [2024-12-10 22:40:56.052295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.572 [2024-12-10 22:40:56.052322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.572 [2024-12-10 22:40:56.062722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.572 [2024-12-10 22:40:56.062749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.572 [2024-12-10 22:40:56.073528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.073563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.083836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.083864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.094909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.094935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.107611] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.107638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.117647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.117674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.127821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.127848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.138766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.138793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.151490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.151528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.163147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.163173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.172588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.172615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.183117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.183144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.195096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.195123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.205130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.205158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.215784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.215811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.226509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.226536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.237252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.237280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.247983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.248011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.258954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.258981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.271417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.271444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.280612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 
[2024-12-10 22:40:56.280639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.573 [2024-12-10 22:40:56.292519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.573 [2024-12-10 22:40:56.292555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.305430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.305457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.315415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.315441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.325756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.325784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.336651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.336678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.347250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.347277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.359754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.359790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.369494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.369521] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.379886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.379913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.390463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.390490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.402802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.402829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.412932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.412959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.423583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.423610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.433962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.433989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.444728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.444755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.457149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.457176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:48.831 [2024-12-10 22:40:56.467379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.467422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.478084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.831 [2024-12-10 22:40:56.478125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.831 [2024-12-10 22:40:56.488760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.832 [2024-12-10 22:40:56.488787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.832 [2024-12-10 22:40:56.499274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.832 [2024-12-10 22:40:56.499300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.832 [2024-12-10 22:40:56.512104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.832 [2024-12-10 22:40:56.512132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.832 [2024-12-10 22:40:56.522036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.832 [2024-12-10 22:40:56.522063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.832 [2024-12-10 22:40:56.532708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.832 [2024-12-10 22:40:56.532735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.832 [2024-12-10 22:40:56.546041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.832 [2024-12-10 22:40:56.546068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.832 [2024-12-10 22:40:56.556160] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.832 [2024-12-10 22:40:56.556187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.089 [2024-12-10 22:40:56.566804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.089 [2024-12-10 22:40:56.566842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.089 [2024-12-10 22:40:56.580706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.089 [2024-12-10 22:40:56.580733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.089 [2024-12-10 22:40:56.592373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.089 [2024-12-10 22:40:56.592400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.089 [2024-12-10 22:40:56.601199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.089 [2024-12-10 22:40:56.601225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.089 [2024-12-10 22:40:56.613095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.089 [2024-12-10 22:40:56.613122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.089 [2024-12-10 22:40:56.623611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.089 [2024-12-10 22:40:56.623638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.634105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.634132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.646198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.646225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.655802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.655829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.666310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.666337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.677131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.677159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.687493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.687520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.698023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.698050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.708468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.708495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.718997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.719025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.729256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 
[2024-12-10 22:40:56.729283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.739434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.739461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.749775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.749803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.760178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.760204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.770802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.770839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.784465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.784493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.794561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.794598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.804849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.804876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.090 [2024-12-10 22:40:56.815477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.090 [2024-12-10 22:40:56.815505] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.347 [2024-12-10 22:40:56.826247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.347 [2024-12-10 22:40:56.826274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.347 [2024-12-10 22:40:56.839483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.347 [2024-12-10 22:40:56.839510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.849912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.849939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.860519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.860555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.870975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.871002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.881654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.881681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.893855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.893882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.903606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.903634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:49.348 12008.33 IOPS, 93.82 MiB/s [2024-12-10T21:40:57.080Z] [2024-12-10 22:40:56.914197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.914225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.924658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.924686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.935156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.935183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.945907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.945934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.956859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.956886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.969681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.969708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.979791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.979818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.348 [2024-12-10 22:40:56.990678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:56.990704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:49.348 [2024-12-10 22:40:57.003328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.348 [2024-12-10 22:40:57.003356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line *ERROR* pair (spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc_ns_paused: "Unable to add namespace") repeats roughly every 10 ms from 22:40:57.013 through 22:40:57.906; ~85 duplicate pairs elided ...]
00:09:50.382 12031.75 IOPS, 94.00 MiB/s [2024-12-10T21:40:58.114Z]
[... the same *ERROR* pair continues roughly every 10 ms from 22:40:57.917 through 22:40:58.838; ~90 duplicate pairs elided ...]
00:09:51.158 [2024-12-10 22:40:58.848220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.158 [2024-12-10 22:40:58.848247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:09:51.158 [2024-12-10 22:40:58.858434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.158 [2024-12-10 22:40:58.858462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.158 [2024-12-10 22:40:58.869199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.158 [2024-12-10 22:40:58.869226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.158 [2024-12-10 22:40:58.879601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.158 [2024-12-10 22:40:58.879628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.890159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.890186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.900679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.900707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.911634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.911676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 12050.60 IOPS, 94.15 MiB/s [2024-12-10T21:40:59.148Z] [2024-12-10 22:40:58.921568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.921596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 00:09:51.416 Latency(us) 00:09:51.416 [2024-12-10T21:40:59.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.416 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, 
depth: 128, IO size: 8192) 00:09:51.416 Nvme1n1 : 5.01 12051.25 94.15 0.00 0.00 10607.75 4611.79 22039.51 00:09:51.416 [2024-12-10T21:40:59.148Z] =================================================================================================================== 00:09:51.416 [2024-12-10T21:40:59.148Z] Total : 12051.25 94.15 0.00 0.00 10607.75 4611.79 22039.51 00:09:51.416 [2024-12-10 22:40:58.927444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.927467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.935448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.935471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.943469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.943491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.951564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.951625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.959584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.959654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.967606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.967651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.975628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.975672] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.983654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.983710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.991679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.991729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:58.999691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:58.999739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.007710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.007759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.015732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.015782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.023762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.023814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.031783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.031835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.039806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.039857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:51.416 [2024-12-10 22:40:59.047819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.047866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.055843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.055891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.063858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.063901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.071881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.071920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.079867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.079903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.087903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.087924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.095920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.095940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.103913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.103935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.112006] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.112069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.120009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.120056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.127996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.128017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.136013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.136033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.416 [2024-12-10 22:40:59.144034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.416 [2024-12-10 22:40:59.144054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4181253) - No such process 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4181253 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b 
malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.674 delay0 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.674 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:51.674 [2024-12-10 22:40:59.270381] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:58.230 [2024-12-10 22:41:05.463137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab9e30 is same with the state(6) to be set 00:09:58.230 [2024-12-10 22:41:05.463223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab9e30 is same with the state(6) to be set 00:09:58.230 Initializing NVMe Controllers 00:09:58.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:58.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:58.230 Initialization complete. Launching workers. 
00:09:58.230 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 742 00:09:58.230 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1028, failed to submit 34 00:09:58.230 success 850, unsuccessful 178, failed 0 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.230 rmmod nvme_tcp 00:09:58.230 rmmod nvme_fabrics 00:09:58.230 rmmod nvme_keyring 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4179913 ']' 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4179913 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 4179913 ']' 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 4179913 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4179913 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4179913' 00:09:58.230 killing process with pid 4179913 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 4179913 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 4179913 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.230 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.140 22:41:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.140 00:10:00.140 real 0m27.844s 00:10:00.140 user 0m41.138s 00:10:00.140 sys 0m8.156s 00:10:00.140 22:41:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.140 22:41:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.140 ************************************ 00:10:00.140 END TEST nvmf_zcopy 00:10:00.140 ************************************ 00:10:00.399 22:41:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.399 22:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.399 22:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.399 22:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.399 ************************************ 00:10:00.399 START TEST nvmf_nmic 00:10:00.399 ************************************ 00:10:00.399 22:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.399 * Looking for test storage... 
00:10:00.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.399 22:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:00.399 22:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:00.399 22:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:00.399 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:00.399 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.399 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.399 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.399 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.399 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.399 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.399 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.399 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.400 22:41:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:00.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.400 --rc genhtml_branch_coverage=1 00:10:00.400 --rc genhtml_function_coverage=1 00:10:00.400 --rc genhtml_legend=1 00:10:00.400 --rc geninfo_all_blocks=1 00:10:00.400 --rc geninfo_unexecuted_blocks=1 
00:10:00.400 00:10:00.400 ' 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:00.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.400 --rc genhtml_branch_coverage=1 00:10:00.400 --rc genhtml_function_coverage=1 00:10:00.400 --rc genhtml_legend=1 00:10:00.400 --rc geninfo_all_blocks=1 00:10:00.400 --rc geninfo_unexecuted_blocks=1 00:10:00.400 00:10:00.400 ' 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:00.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.400 --rc genhtml_branch_coverage=1 00:10:00.400 --rc genhtml_function_coverage=1 00:10:00.400 --rc genhtml_legend=1 00:10:00.400 --rc geninfo_all_blocks=1 00:10:00.400 --rc geninfo_unexecuted_blocks=1 00:10:00.400 00:10:00.400 ' 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:00.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.400 --rc genhtml_branch_coverage=1 00:10:00.400 --rc genhtml_function_coverage=1 00:10:00.400 --rc genhtml_legend=1 00:10:00.400 --rc geninfo_all_blocks=1 00:10:00.400 --rc geninfo_unexecuted_blocks=1 00:10:00.400 00:10:00.400 ' 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.400 22:41:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:00.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:00.400 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:00.401 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:00.401 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:00.401 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:00.401 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:00.401 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:10:00.401 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:02.931 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:02.931 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
00:10:02.931 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:02.931 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=()
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=()
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:10:02.932 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:10:02.932 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:10:02.932 Found net devices under 0000:0a:00.0: cvl_0_0
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:10:02.932 Found net devices under 0000:0a:00.1: cvl_0_1
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:02.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:02.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms
00:10:02.932
00:10:02.932 --- 10.0.0.2 ping statistics ---
00:10:02.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:02.932 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:02.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:02.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms
00:10:02.932
00:10:02.932 --- 10.0.0.1 ping statistics ---
00:10:02.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:02.932 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:02.932 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:02.933 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:02.933 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4184552
00:10:02.933 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:02.933 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 4184552
00:10:02.933 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 4184552 ']'
00:10:02.933 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:02.933 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:02.933 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:02.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:02.933 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:02.933 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:02.933 [2024-12-10 22:41:10.539400] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
[2024-12-10 22:41:10.539493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-10 22:41:10.615728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:03.191 [2024-12-10 22:41:10.674861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-10 22:41:10.674921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:03.191 [2024-12-10 22:41:10.674933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-10 22:41:10.674944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-10 22:41:10.674968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-12-10 22:41:10.676498] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-10 22:41:10.676630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-10 22:41:10.676658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
[2024-12-10 22:41:10.676661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:03.192 [2024-12-10 22:41:10.817676] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:03.192 Malloc0
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:03.192 [2024-12-10 22:41:10.876416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:10:03.192 test case1: single bdev can't be used in multiple subsystems
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:03.192 [2024-12-10 22:41:10.900275] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:10:03.192 [2024-12-10 22:41:10.900304] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
[2024-12-10 22:41:10.900333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:03.192 request:
00:10:03.192 {
00:10:03.192 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:10:03.192 "namespace": {
00:10:03.192 "bdev_name": "Malloc0",
00:10:03.192 "no_auto_visible": false,
00:10:03.192 "hide_metadata": false
00:10:03.192 },
00:10:03.192 "method": "nvmf_subsystem_add_ns",
00:10:03.192 "req_id": 1
00:10:03.192 }
00:10:03.192 Got JSON-RPC error response
00:10:03.192 response:
00:10:03.192 {
00:10:03.192 "code": -32602,
00:10:03.192 "message": "Invalid parameters"
00:10:03.192 }
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:10:03.192 Adding namespace failed - expected result.
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:10:03.192 test case2: host connect to nvmf target in multiple paths
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:03.192 [2024-12-10 22:41:10.908380] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.192 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:04.127 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:10:04.385 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:10:04.385 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:10:04.385 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:04.385 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:10:04.385 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:10:06.911 22:41:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:06.911 22:41:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:06.911 22:41:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:06.911 22:41:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:10:06.911 22:41:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:06.911 22:41:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:10:06.911 22:41:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:10:06.911 [global]
00:10:06.911 thread=1
00:10:06.911 invalidate=1
00:10:06.911 rw=write
00:10:06.911 time_based=1
00:10:06.911 runtime=1
00:10:06.911 ioengine=libaio
00:10:06.911 direct=1
00:10:06.911 bs=4096
00:10:06.911 iodepth=1
00:10:06.911 norandommap=0
00:10:06.911 numjobs=1
00:10:06.911
00:10:06.911 verify_dump=1
00:10:06.911 verify_backlog=512
00:10:06.911 verify_state_save=0
00:10:06.911 do_verify=1
00:10:06.911 verify=crc32c-intel
00:10:06.911 [job0]
00:10:06.911 filename=/dev/nvme0n1
00:10:06.911 Could not set queue depth (nvme0n1)
00:10:06.911 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:06.911 fio-3.35
00:10:06.911 Starting 1 thread
00:10:07.851
00:10:07.851 job0: (groupid=0, jobs=1): err= 0: pid=4185179: Tue Dec 10 22:41:15 2024
00:10:07.851 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec)
00:10:07.851 slat (nsec): min=9305, max=49904, avg=27128.50, stdev=10487.30
00:10:07.851 clat (usec): min=40848, max=42149, avg=41429.31, stdev=517.26
00:10:07.851 lat (usec): min=40881, max=42166, avg=41456.43, stdev=521.66
00:10:07.851 clat percentiles (usec):
00:10:07.851 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:10:07.851 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681],
00:10:07.851 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:10:07.851 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:10:07.851 | 99.99th=[42206]
00:10:07.851 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:10:07.851 slat (nsec): min=8457, max=29665, avg=9539.62, stdev=1595.46
00:10:07.851 clat (usec): min=145, max=272, avg=160.91, stdev=10.89
00:10:07.851 lat (usec): min=154, max=301, avg=170.45, stdev=11.45
00:10:07.851 clat percentiles (usec):
00:10:07.851 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 151], 20.00th=[ 153],
00:10:07.851 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161],
00:10:07.851 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 178],
00:10:07.851 | 99.00th=[ 190], 99.50th=[ 225], 99.90th=[ 273], 99.95th=[ 273],
00:10:07.851 | 99.99th=[ 273]
00:10:07.851 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:10:07.851 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:10:07.851 lat (usec) : 250=95.69%, 500=0.19%
00:10:07.851 lat (msec) : 50=4.12%
00:10:07.851 cpu : usr=0.20%, sys=0.80%, ctx=534, majf=0, minf=1
00:10:07.851 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:07.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:07.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:07.851 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:07.851 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:07.851
00:10:07.851 Run status group 0 (all jobs):
00:10:07.851 READ: bw=87.9KiB/s (90.0kB/s), 87.9KiB/s-87.9KiB/s (90.0kB/s-90.0kB/s), io=88.0KiB (90.1kB), run=1001-1001msec
00:10:07.851 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec
00:10:07.851
00:10:07.851 Disk stats (read/write):
00:10:07.851 nvme0n1: ios=69/512, merge=0/0, ticks=795/77, in_queue=872, util=91.38%
00:10:07.851 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:07.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:10:07.851 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:07.851 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:10:07.851 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:10:07.851 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:08.109 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4184552 ']'
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4184552
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 4184552 ']'
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 4184552
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4184552
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4184552'
00:10:08.109 killing process with pid 4184552
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 4184552
00:10:08.109 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 4184552
00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.369 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.281 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.281 00:10:10.281 real 0m10.067s 00:10:10.281 user 0m22.033s 00:10:10.281 sys 0m2.529s 00:10:10.281 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.281 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.281 ************************************ 00:10:10.281 END TEST nvmf_nmic 00:10:10.281 ************************************ 00:10:10.281 22:41:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:10:10.281 22:41:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.281 22:41:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.281 22:41:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.541 ************************************ 00:10:10.541 START TEST nvmf_fio_target 00:10:10.541 ************************************ 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:10.541 * Looking for test storage... 00:10:10.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.541 22:41:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:10.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.541 --rc genhtml_branch_coverage=1 00:10:10.541 --rc genhtml_function_coverage=1 00:10:10.541 --rc genhtml_legend=1 00:10:10.541 --rc geninfo_all_blocks=1 00:10:10.541 --rc geninfo_unexecuted_blocks=1 00:10:10.541 00:10:10.541 ' 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:10.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.541 --rc genhtml_branch_coverage=1 00:10:10.541 --rc genhtml_function_coverage=1 00:10:10.541 --rc genhtml_legend=1 00:10:10.541 --rc geninfo_all_blocks=1 00:10:10.541 --rc geninfo_unexecuted_blocks=1 00:10:10.541 00:10:10.541 ' 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:10.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.541 --rc genhtml_branch_coverage=1 00:10:10.541 --rc genhtml_function_coverage=1 00:10:10.541 --rc genhtml_legend=1 00:10:10.541 --rc geninfo_all_blocks=1 00:10:10.541 --rc geninfo_unexecuted_blocks=1 00:10:10.541 00:10:10.541 ' 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:10.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.541 --rc 
genhtml_branch_coverage=1 00:10:10.541 --rc genhtml_function_coverage=1 00:10:10.541 --rc genhtml_legend=1 00:10:10.541 --rc geninfo_all_blocks=1 00:10:10.541 --rc geninfo_unexecuted_blocks=1 00:10:10.541 00:10:10.541 ' 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.541 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.542 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.071 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.071 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.071 22:41:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.071 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.071 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:13.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:13.072 22:41:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:13.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:13.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:13.072 Found net devices under 0000:0a:00.1: cvl_0_1 
00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:13.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:10:13.072 00:10:13.072 --- 10.0.0.2 ping statistics --- 00:10:13.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.072 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:10:13.072 00:10:13.072 --- 10.0.0.1 ping statistics --- 00:10:13.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.072 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.072 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4187270 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4187270 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 4187270 ']' 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.073 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.073 [2024-12-10 22:41:20.552326] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:10:13.073 [2024-12-10 22:41:20.552423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.073 [2024-12-10 22:41:20.626680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.073 [2024-12-10 22:41:20.688504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.073 [2024-12-10 22:41:20.688574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.073 [2024-12-10 22:41:20.688604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.073 [2024-12-10 22:41:20.688615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.073 [2024-12-10 22:41:20.688625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
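The `waitforlisten 4187270` step above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A simplified sketch of that polling pattern (the real helper also checks that the PID is alive; the retry bound mirrors the `max_retries=100` visible in the trace, while the 0.1 s delay is an assumption):

```shell
#!/usr/bin/env bash
# Poll until the RPC UNIX socket appears, giving up after $retries tries.
wait_for_listen() {
  local sock="$1" retries="${2:-100}" i
  for ((i = 0; i < retries; i++)); do
    [ -S "$sock" ] && return 0    # -S: path exists and is a socket
    sleep 0.1
  done
  return 1
}
```

Only once this returns does the script proceed to `timing_exit start_nvmf_tgt` and the transport setup.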
00:10:13.073 [2024-12-10 22:41:20.690203] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.073 [2024-12-10 22:41:20.690261] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.073 [2024-12-10 22:41:20.690330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.073 [2024-12-10 22:41:20.690332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.330 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.330 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:13.330 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.330 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.330 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.330 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.330 22:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.588 [2024-12-10 22:41:21.081914] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.588 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.846 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:13.846 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.104 22:41:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:14.104 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.362 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:14.362 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.620 22:41:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:14.620 22:41:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:14.877 22:41:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.135 22:41:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:15.135 22:41:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.393 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:15.393 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.959 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:15.959 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:15.959 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.216 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:16.216 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.782 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:16.782 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:17.040 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.040 [2024-12-10 22:41:24.753790] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.297 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:17.554 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:17.811 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
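Condensed from the `rpc.py` trace above, the provisioning sequence `target/fio.sh` drives: create the TCP transport, back the subsystem with plain malloc bdevs plus a RAID-0 and a concat bdev, then expose a listener and connect the kernel initiator. A hedged sketch (paths shortened; assumes `rpc.py` from an SPDK checkout is on `PATH`; `rpc()` prints in DRYRUN mode so the sketch can run without a live target):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Print RPC calls by default; set DRYRUN=0 to issue them against a target.
rpc() { if [ "${DRYRUN:-1}" = 1 ]; then echo "rpc.py $*"; else rpc.py "$@"; fi; }

provision() {
  rpc nvmf_create_transport -t tcp -o -u 8192
  # The full run creates seven 64 MiB / 512 B-block malloc bdevs (Malloc0..Malloc6).
  rpc bdev_malloc_create 64 512
  rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
}

provision
```

The four namespaces explain the four block devices (`nvme0n1`..`nvme0n4`) that the fio jobs target after `nvme connect` succeeds.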
00:10:18.377 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:18.377 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:18.377 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.377 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:18.377 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:18.377 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:20.323 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:20.323 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:20.323 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.586 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:20.586 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.586 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:20.586 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:20.586 [global] 00:10:20.586 thread=1 00:10:20.586 invalidate=1 00:10:20.586 rw=write 00:10:20.586 time_based=1 00:10:20.586 runtime=1 00:10:20.586 ioengine=libaio 00:10:20.586 direct=1 00:10:20.586 bs=4096 00:10:20.586 iodepth=1 00:10:20.586 norandommap=0 00:10:20.586 numjobs=1 00:10:20.586 00:10:20.586 
verify_dump=1 00:10:20.586 verify_backlog=512 00:10:20.586 verify_state_save=0 00:10:20.586 do_verify=1 00:10:20.586 verify=crc32c-intel 00:10:20.586 [job0] 00:10:20.586 filename=/dev/nvme0n1 00:10:20.586 [job1] 00:10:20.586 filename=/dev/nvme0n2 00:10:20.586 [job2] 00:10:20.586 filename=/dev/nvme0n3 00:10:20.586 [job3] 00:10:20.586 filename=/dev/nvme0n4 00:10:20.586 Could not set queue depth (nvme0n1) 00:10:20.586 Could not set queue depth (nvme0n2) 00:10:20.586 Could not set queue depth (nvme0n3) 00:10:20.586 Could not set queue depth (nvme0n4) 00:10:20.586 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.586 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.586 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.586 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.586 fio-3.35 00:10:20.586 Starting 4 threads 00:10:21.960 00:10:21.960 job0: (groupid=0, jobs=1): err= 0: pid=4188347: Tue Dec 10 22:41:29 2024 00:10:21.960 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:10:21.960 slat (nsec): min=12633, max=35997, avg=27067.82, stdev=8710.71 00:10:21.960 clat (usec): min=28386, max=41005, avg=40382.72, stdev=2679.79 00:10:21.960 lat (usec): min=28402, max=41022, avg=40409.79, stdev=2682.23 00:10:21.960 clat percentiles (usec): 00:10:21.960 | 1.00th=[28443], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:21.960 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:21.960 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:21.960 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:21.960 | 99.99th=[41157] 00:10:21.960 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:21.960 slat (nsec): min=7438, 
max=42448, avg=18953.40, stdev=5968.40 00:10:21.960 clat (usec): min=156, max=447, avg=199.91, stdev=34.32 00:10:21.960 lat (usec): min=166, max=463, avg=218.86, stdev=34.40 00:10:21.960 clat percentiles (usec): 00:10:21.960 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:10:21.960 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:10:21.960 | 70.00th=[ 200], 80.00th=[ 219], 90.00th=[ 239], 95.00th=[ 255], 00:10:21.960 | 99.00th=[ 392], 99.50th=[ 416], 99.90th=[ 449], 99.95th=[ 449], 00:10:21.960 | 99.99th=[ 449] 00:10:21.960 bw ( KiB/s): min= 4096, max= 4096, per=15.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.960 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.960 lat (usec) : 250=90.64%, 500=5.24% 00:10:21.960 lat (msec) : 50=4.12% 00:10:21.960 cpu : usr=0.80%, sys=1.20%, ctx=534, majf=0, minf=1 00:10:21.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.960 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.960 job1: (groupid=0, jobs=1): err= 0: pid=4188348: Tue Dec 10 22:41:29 2024 00:10:21.960 read: IOPS=2301, BW=9207KiB/s (9428kB/s)(9216KiB/1001msec) 00:10:21.960 slat (nsec): min=4323, max=55923, avg=11353.72, stdev=5639.15 00:10:21.960 clat (usec): min=170, max=497, avg=217.51, stdev=42.80 00:10:21.960 lat (usec): min=182, max=514, avg=228.87, stdev=45.59 00:10:21.960 clat percentiles (usec): 00:10:21.960 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:10:21.960 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:10:21.960 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 235], 95.00th=[ 334], 00:10:21.960 | 99.00th=[ 416], 99.50th=[ 433], 99.90th=[ 469], 99.95th=[ 
478], 00:10:21.960 | 99.99th=[ 498] 00:10:21.960 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:21.960 slat (nsec): min=5775, max=44249, avg=13928.91, stdev=3848.71 00:10:21.960 clat (usec): min=128, max=400, avg=163.84, stdev=20.72 00:10:21.960 lat (usec): min=140, max=423, avg=177.76, stdev=21.94 00:10:21.960 clat percentiles (usec): 00:10:21.960 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:10:21.960 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:10:21.960 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 192], 95.00th=[ 208], 00:10:21.960 | 99.00th=[ 231], 99.50th=[ 243], 99.90th=[ 273], 99.95th=[ 310], 00:10:21.960 | 99.99th=[ 400] 00:10:21.960 bw ( KiB/s): min=12000, max=12000, per=45.56%, avg=12000.00, stdev= 0.00, samples=1 00:10:21.960 iops : min= 3000, max= 3000, avg=3000.00, stdev= 0.00, samples=1 00:10:21.960 lat (usec) : 250=95.91%, 500=4.09% 00:10:21.960 cpu : usr=3.60%, sys=6.40%, ctx=4865, majf=0, minf=1 00:10:21.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.960 issued rwts: total=2304,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.960 job2: (groupid=0, jobs=1): err= 0: pid=4188349: Tue Dec 10 22:41:29 2024 00:10:21.960 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:21.960 slat (nsec): min=5868, max=69621, avg=22409.53, stdev=10458.64 00:10:21.960 clat (usec): min=208, max=41111, avg=366.20, stdev=1041.51 00:10:21.960 lat (usec): min=233, max=41120, avg=388.61, stdev=1041.12 00:10:21.960 clat percentiles (usec): 00:10:21.960 | 1.00th=[ 229], 5.00th=[ 253], 10.00th=[ 273], 20.00th=[ 293], 00:10:21.960 | 30.00th=[ 314], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 355], 00:10:21.960 | 
70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 424], 00:10:21.961 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 498], 99.95th=[41157], 00:10:21.961 | 99.99th=[41157] 00:10:21.961 write: IOPS=1680, BW=6721KiB/s (6883kB/s)(6728KiB/1001msec); 0 zone resets 00:10:21.961 slat (nsec): min=6353, max=46671, avg=16297.19, stdev=4225.02 00:10:21.961 clat (usec): min=138, max=434, avg=213.28, stdev=29.89 00:10:21.961 lat (usec): min=154, max=452, avg=229.58, stdev=29.40 00:10:21.961 clat percentiles (usec): 00:10:21.961 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 196], 00:10:21.961 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:10:21.961 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 247], 00:10:21.961 | 99.00th=[ 302], 99.50th=[ 338], 99.90th=[ 429], 99.95th=[ 437], 00:10:21.961 | 99.99th=[ 437] 00:10:21.961 bw ( KiB/s): min= 8192, max= 8192, per=31.10%, avg=8192.00, stdev= 0.00, samples=1 00:10:21.961 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:21.961 lat (usec) : 250=52.49%, 500=47.48% 00:10:21.961 lat (msec) : 50=0.03% 00:10:21.961 cpu : usr=2.90%, sys=7.10%, ctx=3219, majf=0, minf=1 00:10:21.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.961 issued rwts: total=1536,1682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.961 job3: (groupid=0, jobs=1): err= 0: pid=4188352: Tue Dec 10 22:41:29 2024 00:10:21.961 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:21.961 slat (nsec): min=6108, max=68250, avg=19386.21, stdev=8454.42 00:10:21.961 clat (usec): min=189, max=521, avg=331.24, stdev=43.81 00:10:21.961 lat (usec): min=196, max=552, avg=350.62, stdev=47.47 00:10:21.961 clat percentiles (usec): 
00:10:21.961 | 1.00th=[ 212], 5.00th=[ 239], 10.00th=[ 281], 20.00th=[ 306], 00:10:21.961 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 343], 00:10:21.961 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 392], 00:10:21.961 | 99.00th=[ 457], 99.50th=[ 474], 99.90th=[ 519], 99.95th=[ 523], 00:10:21.961 | 99.99th=[ 523] 00:10:21.961 write: IOPS=1855, BW=7421KiB/s (7599kB/s)(7428KiB/1001msec); 0 zone resets 00:10:21.961 slat (nsec): min=6944, max=69964, avg=18693.13, stdev=6903.94 00:10:21.961 clat (usec): min=142, max=1236, avg=220.33, stdev=54.95 00:10:21.961 lat (usec): min=151, max=1248, avg=239.02, stdev=56.16 00:10:21.961 clat percentiles (usec): 00:10:21.961 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 192], 00:10:21.961 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 223], 00:10:21.961 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 260], 95.00th=[ 302], 00:10:21.961 | 99.00th=[ 379], 99.50th=[ 404], 99.90th=[ 1106], 99.95th=[ 1237], 00:10:21.961 | 99.99th=[ 1237] 00:10:21.961 bw ( KiB/s): min= 8192, max= 8192, per=31.10%, avg=8192.00, stdev= 0.00, samples=1 00:10:21.961 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:21.961 lat (usec) : 250=51.49%, 500=48.25%, 750=0.15%, 1000=0.06% 00:10:21.961 lat (msec) : 2=0.06% 00:10:21.961 cpu : usr=3.30%, sys=8.30%, ctx=3395, majf=0, minf=1 00:10:21.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.961 issued rwts: total=1536,1857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.961 00:10:21.961 Run status group 0 (all jobs): 00:10:21.961 READ: bw=21.0MiB/s (22.0MB/s), 87.6KiB/s-9207KiB/s (89.8kB/s-9428kB/s), io=21.1MiB (22.1MB), run=1001-1004msec 00:10:21.961 WRITE: bw=25.7MiB/s 
(27.0MB/s), 2040KiB/s-9.99MiB/s (2089kB/s-10.5MB/s), io=25.8MiB (27.1MB), run=1001-1004msec 00:10:21.961 00:10:21.961 Disk stats (read/write): 00:10:21.961 nvme0n1: ios=67/512, merge=0/0, ticks=724/97, in_queue=821, util=86.37% 00:10:21.961 nvme0n2: ios=2098/2074, merge=0/0, ticks=480/332, in_queue=812, util=90.43% 00:10:21.961 nvme0n3: ios=1245/1536, merge=0/0, ticks=1329/324, in_queue=1653, util=93.40% 00:10:21.961 nvme0n4: ios=1367/1536, merge=0/0, ticks=870/339, in_queue=1209, util=94.19% 00:10:21.961 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:21.961 [global] 00:10:21.961 thread=1 00:10:21.961 invalidate=1 00:10:21.961 rw=randwrite 00:10:21.961 time_based=1 00:10:21.961 runtime=1 00:10:21.961 ioengine=libaio 00:10:21.961 direct=1 00:10:21.961 bs=4096 00:10:21.961 iodepth=1 00:10:21.961 norandommap=0 00:10:21.961 numjobs=1 00:10:21.961 00:10:21.961 verify_dump=1 00:10:21.961 verify_backlog=512 00:10:21.961 verify_state_save=0 00:10:21.961 do_verify=1 00:10:21.961 verify=crc32c-intel 00:10:21.961 [job0] 00:10:21.961 filename=/dev/nvme0n1 00:10:21.961 [job1] 00:10:21.961 filename=/dev/nvme0n2 00:10:21.961 [job2] 00:10:21.961 filename=/dev/nvme0n3 00:10:21.961 [job3] 00:10:21.961 filename=/dev/nvme0n4 00:10:21.961 Could not set queue depth (nvme0n1) 00:10:21.961 Could not set queue depth (nvme0n2) 00:10:21.961 Could not set queue depth (nvme0n3) 00:10:21.961 Could not set queue depth (nvme0n4) 00:10:22.219 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.219 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.219 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.219 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.219 fio-3.35 00:10:22.219 Starting 4 threads 00:10:23.591 00:10:23.591 job0: (groupid=0, jobs=1): err= 0: pid=4188586: Tue Dec 10 22:41:30 2024 00:10:23.591 read: IOPS=2219, BW=8879KiB/s (9092kB/s)(8888KiB/1001msec) 00:10:23.591 slat (nsec): min=4050, max=50791, avg=11450.56, stdev=7124.26 00:10:23.591 clat (usec): min=157, max=2865, avg=235.84, stdev=94.64 00:10:23.591 lat (usec): min=161, max=2882, avg=247.29, stdev=97.91 00:10:23.591 clat percentiles (usec): 00:10:23.591 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188], 00:10:23.591 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 217], 60.00th=[ 233], 00:10:23.591 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 293], 95.00th=[ 420], 00:10:23.591 | 99.00th=[ 502], 99.50th=[ 553], 99.90th=[ 1106], 99.95th=[ 1188], 00:10:23.591 | 99.99th=[ 2868] 00:10:23.591 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:23.591 slat (nsec): min=5214, max=43329, avg=11988.42, stdev=5781.27 00:10:23.592 clat (usec): min=120, max=433, avg=157.38, stdev=29.81 00:10:23.592 lat (usec): min=126, max=447, avg=169.37, stdev=32.51 00:10:23.592 clat percentiles (usec): 00:10:23.592 | 1.00th=[ 125], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 139], 00:10:23.592 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:10:23.592 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 186], 95.00th=[ 204], 00:10:23.592 | 99.00th=[ 277], 99.50th=[ 322], 99.90th=[ 334], 99.95th=[ 375], 00:10:23.592 | 99.99th=[ 433] 00:10:23.592 bw ( KiB/s): min= 8864, max= 8864, per=40.53%, avg=8864.00, stdev= 0.00, samples=1 00:10:23.592 iops : min= 2216, max= 2216, avg=2216.00, stdev= 0.00, samples=1 00:10:23.592 lat (usec) : 250=88.77%, 500=10.69%, 750=0.42%, 1000=0.06% 00:10:23.592 lat (msec) : 2=0.04%, 4=0.02% 00:10:23.592 cpu : usr=3.50%, sys=6.10%, ctx=4783, majf=0, minf=1 00:10:23.592 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:23.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.592 issued rwts: total=2222,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.592 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.592 job1: (groupid=0, jobs=1): err= 0: pid=4188587: Tue Dec 10 22:41:30 2024 00:10:23.592 read: IOPS=258, BW=1033KiB/s (1058kB/s)(1064KiB/1030msec) 00:10:23.592 slat (nsec): min=6095, max=57864, avg=16015.31, stdev=8241.88 00:10:23.592 clat (usec): min=182, max=41095, avg=3490.30, stdev=10988.33 00:10:23.592 lat (usec): min=194, max=41115, avg=3506.32, stdev=10990.48 00:10:23.592 clat percentiles (usec): 00:10:23.592 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 225], 00:10:23.592 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 269], 00:10:23.592 | 70.00th=[ 310], 80.00th=[ 359], 90.00th=[ 461], 95.00th=[41157], 00:10:23.592 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:23.592 | 99.99th=[41157] 00:10:23.592 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:10:23.592 slat (nsec): min=5464, max=34096, avg=11462.42, stdev=5892.66 00:10:23.592 clat (usec): min=137, max=434, avg=172.50, stdev=23.37 00:10:23.592 lat (usec): min=145, max=464, avg=183.96, stdev=25.41 00:10:23.592 clat percentiles (usec): 00:10:23.592 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:10:23.592 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 174], 00:10:23.592 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 208], 00:10:23.592 | 99.00th=[ 265], 99.50th=[ 293], 99.90th=[ 437], 99.95th=[ 437], 00:10:23.592 | 99.99th=[ 437] 00:10:23.592 bw ( KiB/s): min= 4096, max= 4096, per=18.73%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.592 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.592 lat (usec) : 250=83.03%, 500=13.75%, 
750=0.39% 00:10:23.592 lat (msec) : 4=0.13%, 50=2.70% 00:10:23.592 cpu : usr=0.29%, sys=1.17%, ctx=778, majf=0, minf=1 00:10:23.592 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.592 issued rwts: total=266,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.592 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.592 job2: (groupid=0, jobs=1): err= 0: pid=4188595: Tue Dec 10 22:41:30 2024 00:10:23.592 read: IOPS=1508, BW=6035KiB/s (6180kB/s)(6156KiB/1020msec) 00:10:23.592 slat (nsec): min=6929, max=48388, avg=15394.70, stdev=5221.16 00:10:23.592 clat (usec): min=194, max=41009, avg=342.41, stdev=1794.16 00:10:23.592 lat (usec): min=201, max=41022, avg=357.80, stdev=1794.04 00:10:23.592 clat percentiles (usec): 00:10:23.592 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 227], 00:10:23.592 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 262], 00:10:23.592 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 318], 95.00th=[ 322], 00:10:23.592 | 99.00th=[ 371], 99.50th=[ 433], 99.90th=[41157], 99.95th=[41157], 00:10:23.592 | 99.99th=[41157] 00:10:23.592 write: IOPS=2007, BW=8031KiB/s (8224kB/s)(8192KiB/1020msec); 0 zone resets 00:10:23.592 slat (nsec): min=7574, max=55737, avg=17810.88, stdev=7282.78 00:10:23.592 clat (usec): min=138, max=3675, avg=202.62, stdev=96.01 00:10:23.592 lat (usec): min=146, max=3683, avg=220.43, stdev=96.59 00:10:23.592 clat percentiles (usec): 00:10:23.592 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 172], 00:10:23.592 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:10:23.592 | 70.00th=[ 204], 80.00th=[ 227], 90.00th=[ 258], 95.00th=[ 277], 00:10:23.592 | 99.00th=[ 383], 99.50th=[ 441], 99.90th=[ 515], 99.95th=[ 1876], 00:10:23.592 | 99.99th=[ 3687] 00:10:23.592 bw ( KiB/s): min= 
8192, max= 8192, per=37.45%, avg=8192.00, stdev= 0.00, samples=2 00:10:23.592 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:23.592 lat (usec) : 250=70.73%, 500=28.99%, 750=0.14% 00:10:23.592 lat (msec) : 2=0.03%, 4=0.03%, 50=0.08% 00:10:23.592 cpu : usr=4.91%, sys=7.16%, ctx=3589, majf=0, minf=1 00:10:23.592 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.592 issued rwts: total=1539,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.592 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.592 job3: (groupid=0, jobs=1): err= 0: pid=4188601: Tue Dec 10 22:41:30 2024 00:10:23.592 read: IOPS=175, BW=702KiB/s (719kB/s)(708KiB/1008msec) 00:10:23.592 slat (nsec): min=13563, max=55596, avg=20995.74, stdev=6488.06 00:10:23.592 clat (usec): min=230, max=41170, avg=4878.88, stdev=12689.43 00:10:23.592 lat (usec): min=249, max=41188, avg=4899.88, stdev=12689.49 00:10:23.592 clat percentiles (usec): 00:10:23.592 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 262], 00:10:23.592 | 30.00th=[ 273], 40.00th=[ 318], 50.00th=[ 449], 60.00th=[ 465], 00:10:23.592 | 70.00th=[ 482], 80.00th=[ 502], 90.00th=[40633], 95.00th=[41157], 00:10:23.592 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:23.592 | 99.99th=[41157] 00:10:23.592 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:23.592 slat (nsec): min=7898, max=44285, avg=18176.68, stdev=8304.42 00:10:23.592 clat (usec): min=172, max=807, avg=247.37, stdev=43.64 00:10:23.592 lat (usec): min=182, max=851, avg=265.54, stdev=42.55 00:10:23.592 clat percentiles (usec): 00:10:23.592 | 1.00th=[ 186], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 00:10:23.592 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 251], 00:10:23.592 | 
70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 306], 00:10:23.592 | 99.00th=[ 400], 99.50th=[ 424], 99.90th=[ 807], 99.95th=[ 807], 00:10:23.592 | 99.99th=[ 807] 00:10:23.592 bw ( KiB/s): min= 4096, max= 4096, per=18.73%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.592 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.592 lat (usec) : 250=46.15%, 500=48.48%, 750=2.32%, 1000=0.15% 00:10:23.592 lat (msec) : 50=2.90% 00:10:23.592 cpu : usr=1.39%, sys=1.19%, ctx=692, majf=0, minf=1 00:10:23.592 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.592 issued rwts: total=177,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.592 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.592 00:10:23.592 Run status group 0 (all jobs): 00:10:23.592 READ: bw=15.9MiB/s (16.7MB/s), 702KiB/s-8879KiB/s (719kB/s-9092kB/s), io=16.4MiB (17.2MB), run=1001-1030msec 00:10:23.592 WRITE: bw=21.4MiB/s (22.4MB/s), 1988KiB/s-9.99MiB/s (2036kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1030msec 00:10:23.592 00:10:23.592 Disk stats (read/write): 00:10:23.592 nvme0n1: ios=1880/2048, merge=0/0, ticks=459/333, in_queue=792, util=86.87% 00:10:23.592 nvme0n2: ios=261/512, merge=0/0, ticks=724/89, in_queue=813, util=86.60% 00:10:23.592 nvme0n3: ios=1593/1778, merge=0/0, ticks=1006/359, in_queue=1365, util=98.02% 00:10:23.592 nvme0n4: ios=220/512, merge=0/0, ticks=1061/122, in_queue=1183, util=97.27% 00:10:23.592 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:23.592 [global] 00:10:23.592 thread=1 00:10:23.592 invalidate=1 00:10:23.592 rw=write 00:10:23.592 time_based=1 00:10:23.592 runtime=1 00:10:23.592 
ioengine=libaio 00:10:23.592 direct=1 00:10:23.592 bs=4096 00:10:23.592 iodepth=128 00:10:23.592 norandommap=0 00:10:23.592 numjobs=1 00:10:23.592 00:10:23.592 verify_dump=1 00:10:23.592 verify_backlog=512 00:10:23.592 verify_state_save=0 00:10:23.592 do_verify=1 00:10:23.592 verify=crc32c-intel 00:10:23.592 [job0] 00:10:23.592 filename=/dev/nvme0n1 00:10:23.592 [job1] 00:10:23.592 filename=/dev/nvme0n2 00:10:23.592 [job2] 00:10:23.592 filename=/dev/nvme0n3 00:10:23.592 [job3] 00:10:23.592 filename=/dev/nvme0n4 00:10:23.592 Could not set queue depth (nvme0n1) 00:10:23.592 Could not set queue depth (nvme0n2) 00:10:23.592 Could not set queue depth (nvme0n3) 00:10:23.592 Could not set queue depth (nvme0n4) 00:10:23.592 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.592 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.592 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.592 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.592 fio-3.35 00:10:23.592 Starting 4 threads 00:10:24.968 00:10:24.968 job0: (groupid=0, jobs=1): err= 0: pid=4188931: Tue Dec 10 22:41:32 2024 00:10:24.968 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:10:24.968 slat (usec): min=2, max=15262, avg=104.45, stdev=715.88 00:10:24.968 clat (usec): min=5668, max=36650, avg=13302.81, stdev=4483.19 00:10:24.968 lat (usec): min=5682, max=36658, avg=13407.26, stdev=4534.82 00:10:24.968 clat percentiles (usec): 00:10:24.968 | 1.00th=[ 7570], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[10945], 00:10:24.968 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:10:24.968 | 70.00th=[12649], 80.00th=[13566], 90.00th=[20317], 95.00th=[23725], 00:10:24.968 | 99.00th=[28705], 99.50th=[33817], 99.90th=[36439], 99.95th=[36439], 
00:10:24.968 | 99.99th=[36439] 00:10:24.968 write: IOPS=4143, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1001msec); 0 zone resets 00:10:24.968 slat (usec): min=3, max=12958, avg=123.66, stdev=757.77 00:10:24.968 clat (usec): min=671, max=117006, avg=17430.34, stdev=18235.46 00:10:24.968 lat (usec): min=1746, max=117026, avg=17554.00, stdev=18334.40 00:10:24.968 clat percentiles (msec): 00:10:24.968 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 10], 00:10:24.968 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:10:24.968 | 70.00th=[ 13], 80.00th=[ 19], 90.00th=[ 36], 95.00th=[ 59], 00:10:24.968 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 117], 99.95th=[ 117], 00:10:24.968 | 99.99th=[ 117] 00:10:24.968 bw ( KiB/s): min=12912, max=19856, per=23.81%, avg=16384.00, stdev=4910.15, samples=2 00:10:24.968 iops : min= 3228, max= 4964, avg=4096.00, stdev=1227.54, samples=2 00:10:24.968 lat (usec) : 750=0.01% 00:10:24.968 lat (msec) : 2=0.10%, 4=0.30%, 10=15.10%, 20=69.29%, 50=12.02% 00:10:24.968 lat (msec) : 100=2.51%, 250=0.67% 00:10:24.968 cpu : usr=3.70%, sys=5.70%, ctx=355, majf=0, minf=1 00:10:24.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:24.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.969 issued rwts: total=4096,4148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.969 job1: (groupid=0, jobs=1): err= 0: pid=4188932: Tue Dec 10 22:41:32 2024 00:10:24.969 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:10:24.969 slat (usec): min=2, max=12206, avg=96.80, stdev=678.78 00:10:24.969 clat (usec): min=4120, max=32853, avg=12233.77, stdev=3340.38 00:10:24.969 lat (usec): min=4128, max=32858, avg=12330.57, stdev=3385.37 00:10:24.969 clat percentiles (usec): 00:10:24.969 | 1.00th=[ 5342], 5.00th=[ 8586], 10.00th=[ 9503], 
20.00th=[10159], 00:10:24.969 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11338], 60.00th=[11994], 00:10:24.969 | 70.00th=[12911], 80.00th=[13829], 90.00th=[16319], 95.00th=[18744], 00:10:24.969 | 99.00th=[25297], 99.50th=[30540], 99.90th=[32113], 99.95th=[32900], 00:10:24.969 | 99.99th=[32900] 00:10:24.969 write: IOPS=5540, BW=21.6MiB/s (22.7MB/s)(21.8MiB/1009msec); 0 zone resets 00:10:24.969 slat (usec): min=4, max=31971, avg=80.86, stdev=580.17 00:10:24.969 clat (usec): min=3154, max=39879, avg=11080.37, stdev=3570.82 00:10:24.969 lat (usec): min=3176, max=39895, avg=11161.23, stdev=3616.18 00:10:24.969 clat percentiles (usec): 00:10:24.969 | 1.00th=[ 4080], 5.00th=[ 5866], 10.00th=[ 7439], 20.00th=[ 9896], 00:10:24.969 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11469], 00:10:24.969 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12518], 95.00th=[14615], 00:10:24.969 | 99.00th=[24511], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:10:24.969 | 99.99th=[40109] 00:10:24.969 bw ( KiB/s): min=21616, max=22124, per=31.78%, avg=21870.00, stdev=359.21, samples=2 00:10:24.969 iops : min= 5404, max= 5531, avg=5467.50, stdev=89.80, samples=2 00:10:24.969 lat (msec) : 4=0.48%, 10=18.35%, 20=78.42%, 50=2.75% 00:10:24.969 cpu : usr=6.94%, sys=10.42%, ctx=579, majf=0, minf=1 00:10:24.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:24.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.969 issued rwts: total=5120,5590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.969 job2: (groupid=0, jobs=1): err= 0: pid=4188933: Tue Dec 10 22:41:32 2024 00:10:24.969 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:24.969 slat (usec): min=2, max=28042, avg=179.34, stdev=1309.40 00:10:24.969 clat (usec): min=5049, max=85844, 
avg=21185.70, stdev=13029.42 00:10:24.969 lat (usec): min=5057, max=85850, avg=21365.04, stdev=13166.13 00:10:24.969 clat percentiles (usec): 00:10:24.969 | 1.00th=[ 5211], 5.00th=[10814], 10.00th=[12125], 20.00th=[13698], 00:10:24.969 | 30.00th=[13960], 40.00th=[14877], 50.00th=[15139], 60.00th=[18482], 00:10:24.969 | 70.00th=[23987], 80.00th=[28443], 90.00th=[35914], 95.00th=[45351], 00:10:24.969 | 99.00th=[76022], 99.50th=[84411], 99.90th=[85459], 99.95th=[85459], 00:10:24.969 | 99.99th=[85459] 00:10:24.969 write: IOPS=3002, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1004msec); 0 zone resets 00:10:24.969 slat (usec): min=3, max=24015, avg=166.23, stdev=956.33 00:10:24.969 clat (usec): min=511, max=85840, avg=22924.95, stdev=14711.04 00:10:24.969 lat (usec): min=4297, max=85852, avg=23091.18, stdev=14798.28 00:10:24.969 clat percentiles (usec): 00:10:24.969 | 1.00th=[ 4817], 5.00th=[ 9765], 10.00th=[12649], 20.00th=[13042], 00:10:24.969 | 30.00th=[14615], 40.00th=[15270], 50.00th=[15795], 60.00th=[19268], 00:10:24.969 | 70.00th=[24511], 80.00th=[29492], 90.00th=[52691], 95.00th=[58983], 00:10:24.969 | 99.00th=[65274], 99.50th=[66323], 99.90th=[76022], 99.95th=[85459], 00:10:24.969 | 99.99th=[85459] 00:10:24.969 bw ( KiB/s): min=10808, max=12288, per=16.78%, avg=11548.00, stdev=1046.52, samples=2 00:10:24.969 iops : min= 2702, max= 3072, avg=2887.00, stdev=261.63, samples=2 00:10:24.969 lat (usec) : 750=0.02% 00:10:24.969 lat (msec) : 10=4.07%, 20=60.56%, 50=27.21%, 100=8.14% 00:10:24.969 cpu : usr=2.39%, sys=5.58%, ctx=316, majf=0, minf=1 00:10:24.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:24.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.969 issued rwts: total=2560,3015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.969 job3: (groupid=0, 
jobs=1): err= 0: pid=4188934: Tue Dec 10 22:41:32 2024 00:10:24.969 read: IOPS=4205, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1004msec) 00:10:24.969 slat (usec): min=3, max=15566, avg=121.15, stdev=760.39 00:10:24.969 clat (usec): min=1236, max=52006, avg=14720.82, stdev=5154.06 00:10:24.969 lat (usec): min=3482, max=52022, avg=14841.97, stdev=5216.83 00:10:24.969 clat percentiles (usec): 00:10:24.969 | 1.00th=[ 7635], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[12256], 00:10:24.969 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[14222], 00:10:24.969 | 70.00th=[14615], 80.00th=[16319], 90.00th=[18220], 95.00th=[26084], 00:10:24.969 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[42206], 00:10:24.969 | 99.99th=[52167] 00:10:24.969 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:10:24.969 slat (usec): min=4, max=11240, avg=96.32, stdev=559.30 00:10:24.969 clat (usec): min=6881, max=43950, avg=14066.78, stdev=3869.33 00:10:24.969 lat (usec): min=6896, max=43962, avg=14163.10, stdev=3919.00 00:10:24.969 clat percentiles (usec): 00:10:24.969 | 1.00th=[ 8094], 5.00th=[10028], 10.00th=[11207], 20.00th=[12125], 00:10:24.969 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13566], 00:10:24.969 | 70.00th=[14222], 80.00th=[15270], 90.00th=[17171], 95.00th=[23987], 00:10:24.969 | 99.00th=[29230], 99.50th=[32900], 99.90th=[35914], 99.95th=[36439], 00:10:24.969 | 99.99th=[43779] 00:10:24.969 bw ( KiB/s): min=16448, max=20400, per=26.77%, avg=18424.00, stdev=2794.49, samples=2 00:10:24.969 iops : min= 4112, max= 5100, avg=4606.00, stdev=698.62, samples=2 00:10:24.969 lat (msec) : 2=0.01%, 4=0.09%, 10=5.57%, 20=86.40%, 50=7.92% 00:10:24.969 lat (msec) : 100=0.01% 00:10:24.969 cpu : usr=5.48%, sys=10.67%, ctx=406, majf=0, minf=1 00:10:24.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:24.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.969 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.969 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.969 00:10:24.969 Run status group 0 (all jobs): 00:10:24.969 READ: bw=61.9MiB/s (64.9MB/s), 9.96MiB/s-19.8MiB/s (10.4MB/s-20.8MB/s), io=62.5MiB (65.5MB), run=1001-1009msec 00:10:24.969 WRITE: bw=67.2MiB/s (70.5MB/s), 11.7MiB/s-21.6MiB/s (12.3MB/s-22.7MB/s), io=67.8MiB (71.1MB), run=1001-1009msec 00:10:24.969 00:10:24.969 Disk stats (read/write): 00:10:24.969 nvme0n1: ios=3122/3583, merge=0/0, ticks=27962/50631, in_queue=78593, util=86.97% 00:10:24.969 nvme0n2: ios=4378/4608, merge=0/0, ticks=50630/47741, in_queue=98371, util=99.19% 00:10:24.969 nvme0n3: ios=2069/2248, merge=0/0, ticks=42922/48825, in_queue=91747, util=98.02% 00:10:24.969 nvme0n4: ios=3639/3908, merge=0/0, ticks=27422/24833, in_queue=52255, util=98.43% 00:10:24.969 22:41:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:24.969 [global] 00:10:24.969 thread=1 00:10:24.969 invalidate=1 00:10:24.969 rw=randwrite 00:10:24.969 time_based=1 00:10:24.969 runtime=1 00:10:24.969 ioengine=libaio 00:10:24.969 direct=1 00:10:24.969 bs=4096 00:10:24.969 iodepth=128 00:10:24.969 norandommap=0 00:10:24.969 numjobs=1 00:10:24.969 00:10:24.969 verify_dump=1 00:10:24.969 verify_backlog=512 00:10:24.969 verify_state_save=0 00:10:24.969 do_verify=1 00:10:24.969 verify=crc32c-intel 00:10:24.969 [job0] 00:10:24.969 filename=/dev/nvme0n1 00:10:24.969 [job1] 00:10:24.969 filename=/dev/nvme0n2 00:10:24.969 [job2] 00:10:24.969 filename=/dev/nvme0n3 00:10:24.969 [job3] 00:10:24.969 filename=/dev/nvme0n4 00:10:24.969 Could not set queue depth (nvme0n1) 00:10:24.969 Could not set queue depth (nvme0n2) 00:10:24.969 Could not set queue depth (nvme0n3) 00:10:24.969 Could not 
set queue depth (nvme0n4) 00:10:24.969 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.969 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.969 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.969 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.969 fio-3.35 00:10:24.969 Starting 4 threads 00:10:26.343 00:10:26.343 job0: (groupid=0, jobs=1): err= 0: pid=4189166: Tue Dec 10 22:41:33 2024 00:10:26.343 read: IOPS=4345, BW=17.0MiB/s (17.8MB/s)(17.7MiB/1044msec) 00:10:26.343 slat (usec): min=2, max=12928, avg=97.30, stdev=560.87 00:10:26.343 clat (usec): min=6269, max=63570, avg=13469.84, stdev=8562.81 00:10:26.343 lat (usec): min=6898, max=66640, avg=13567.14, stdev=8589.18 00:10:26.343 clat percentiles (usec): 00:10:26.343 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10421], 00:10:26.343 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:10:26.343 | 70.00th=[12387], 80.00th=[13566], 90.00th=[19268], 95.00th=[23725], 00:10:26.343 | 99.00th=[62653], 99.50th=[63177], 99.90th=[63701], 99.95th=[63701], 00:10:26.343 | 99.99th=[63701] 00:10:26.343 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:10:26.343 slat (usec): min=4, max=8823, avg=110.73, stdev=536.35 00:10:26.343 clat (usec): min=6064, max=60497, avg=15383.00, stdev=10649.82 00:10:26.343 lat (usec): min=6084, max=60507, avg=15493.74, stdev=10731.62 00:10:26.343 clat percentiles (usec): 00:10:26.343 | 1.00th=[ 6783], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10552], 00:10:26.343 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:10:26.343 | 70.00th=[13435], 80.00th=[18744], 90.00th=[23462], 95.00th=[46924], 00:10:26.343 | 99.00th=[58983], 99.50th=[58983], 
99.90th=[60556], 99.95th=[60556], 00:10:26.343 | 99.99th=[60556] 00:10:26.343 bw ( KiB/s): min=14264, max=22600, per=30.73%, avg=18432.00, stdev=5894.44, samples=2 00:10:26.343 iops : min= 3566, max= 5650, avg=4608.00, stdev=1473.61, samples=2 00:10:26.343 lat (msec) : 10=8.66%, 20=81.00%, 50=6.80%, 100=3.54% 00:10:26.343 cpu : usr=6.81%, sys=8.63%, ctx=587, majf=0, minf=1 00:10:26.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:26.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.343 issued rwts: total=4537,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.343 job1: (groupid=0, jobs=1): err= 0: pid=4189167: Tue Dec 10 22:41:33 2024 00:10:26.343 read: IOPS=4527, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1002msec) 00:10:26.343 slat (usec): min=2, max=13703, avg=101.79, stdev=595.58 00:10:26.343 clat (usec): min=1205, max=36038, avg=12922.61, stdev=3810.15 00:10:26.343 lat (usec): min=1666, max=36052, avg=13024.40, stdev=3854.16 00:10:26.343 clat percentiles (usec): 00:10:26.343 | 1.00th=[ 5342], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[11338], 00:10:26.343 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:10:26.343 | 70.00th=[12649], 80.00th=[14615], 90.00th=[16909], 95.00th=[17957], 00:10:26.343 | 99.00th=[30016], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:10:26.343 | 99.99th=[35914] 00:10:26.343 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:26.343 slat (usec): min=3, max=8622, avg=106.96, stdev=594.24 00:10:26.343 clat (usec): min=7479, max=45186, avg=14646.71, stdev=6536.22 00:10:26.343 lat (usec): min=7486, max=45193, avg=14753.66, stdev=6588.14 00:10:26.343 clat percentiles (usec): 00:10:26.343 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[10814], 20.00th=[11207], 00:10:26.343 | 
30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:10:26.343 | 70.00th=[12256], 80.00th=[20317], 90.00th=[25560], 95.00th=[26870], 00:10:26.343 | 99.00th=[37487], 99.50th=[38536], 99.90th=[45351], 99.95th=[45351], 00:10:26.343 | 99.99th=[45351] 00:10:26.343 bw ( KiB/s): min=15696, max=21168, per=30.73%, avg=18432.00, stdev=3869.29, samples=2 00:10:26.343 iops : min= 3924, max= 5292, avg=4608.00, stdev=967.32, samples=2 00:10:26.343 lat (msec) : 2=0.12%, 10=7.86%, 20=79.67%, 50=12.35% 00:10:26.343 cpu : usr=5.99%, sys=8.39%, ctx=429, majf=0, minf=1 00:10:26.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:26.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.343 issued rwts: total=4537,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.343 job2: (groupid=0, jobs=1): err= 0: pid=4189168: Tue Dec 10 22:41:33 2024 00:10:26.343 read: IOPS=2912, BW=11.4MiB/s (11.9MB/s)(11.9MiB/1043msec) 00:10:26.343 slat (usec): min=2, max=16143, avg=170.58, stdev=994.34 00:10:26.343 clat (usec): min=6387, max=59108, avg=22617.12, stdev=10670.28 00:10:26.343 lat (usec): min=6396, max=69999, avg=22787.70, stdev=10701.17 00:10:26.343 clat percentiles (usec): 00:10:26.343 | 1.00th=[ 6521], 5.00th=[11863], 10.00th=[12387], 20.00th=[13698], 00:10:26.343 | 30.00th=[14746], 40.00th=[17171], 50.00th=[20055], 60.00th=[25560], 00:10:26.344 | 70.00th=[28181], 80.00th=[29230], 90.00th=[32113], 95.00th=[40109], 00:10:26.344 | 99.00th=[58459], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:10:26.344 | 99.99th=[58983] 00:10:26.344 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec); 0 zone resets 00:10:26.344 slat (usec): min=3, max=8319, avg=149.71, stdev=608.30 00:10:26.344 clat (usec): min=3658, max=40230, avg=20562.86, stdev=6396.91 
00:10:26.344 lat (usec): min=3667, max=40243, avg=20712.57, stdev=6443.53 00:10:26.344 clat percentiles (usec): 00:10:26.344 | 1.00th=[10159], 5.00th=[11731], 10.00th=[12125], 20.00th=[13173], 00:10:26.344 | 30.00th=[15926], 40.00th=[17171], 50.00th=[22414], 60.00th=[24773], 00:10:26.344 | 70.00th=[25035], 80.00th=[26084], 90.00th=[28705], 95.00th=[30278], 00:10:26.344 | 99.00th=[31851], 99.50th=[33424], 99.90th=[34866], 99.95th=[34866], 00:10:26.344 | 99.99th=[40109] 00:10:26.344 bw ( KiB/s): min= 8192, max=16384, per=20.48%, avg=12288.00, stdev=5792.62, samples=2 00:10:26.344 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:10:26.344 lat (msec) : 4=0.02%, 10=1.88%, 20=46.76%, 50=49.28%, 100=2.06% 00:10:26.344 cpu : usr=2.21%, sys=4.61%, ctx=376, majf=0, minf=1 00:10:26.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:26.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.344 issued rwts: total=3038,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.344 job3: (groupid=0, jobs=1): err= 0: pid=4189169: Tue Dec 10 22:41:33 2024 00:10:26.344 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:26.344 slat (usec): min=3, max=10846, avg=151.23, stdev=813.16 00:10:26.344 clat (usec): min=8829, max=30883, avg=19086.51, stdev=3889.97 00:10:26.344 lat (usec): min=8836, max=30924, avg=19237.74, stdev=3967.97 00:10:26.344 clat percentiles (usec): 00:10:26.344 | 1.00th=[ 8848], 5.00th=[11994], 10.00th=[13304], 20.00th=[15139], 00:10:26.344 | 30.00th=[16057], 40.00th=[19006], 50.00th=[20055], 60.00th=[21365], 00:10:26.344 | 70.00th=[21627], 80.00th=[22152], 90.00th=[23200], 95.00th=[24249], 00:10:26.344 | 99.00th=[27132], 99.50th=[27919], 99.90th=[28967], 99.95th=[30540], 00:10:26.344 | 99.99th=[30802] 00:10:26.344 write: 
IOPS=3365, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1001msec); 0 zone resets 00:10:26.344 slat (usec): min=4, max=8758, avg=147.95, stdev=688.22 00:10:26.344 clat (usec): min=566, max=36969, avg=20105.31, stdev=5990.32 00:10:26.344 lat (usec): min=3435, max=36989, avg=20253.26, stdev=6040.37 00:10:26.344 clat percentiles (usec): 00:10:26.344 | 1.00th=[ 4293], 5.00th=[12780], 10.00th=[12911], 20.00th=[14877], 00:10:26.344 | 30.00th=[15664], 40.00th=[18744], 50.00th=[19530], 60.00th=[23200], 00:10:26.344 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26084], 95.00th=[30016], 00:10:26.344 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:10:26.344 | 99.99th=[36963] 00:10:26.344 bw ( KiB/s): min=10384, max=15552, per=21.62%, avg=12968.00, stdev=3654.33, samples=2 00:10:26.344 iops : min= 2596, max= 3888, avg=3242.00, stdev=913.58, samples=2 00:10:26.344 lat (usec) : 750=0.02% 00:10:26.344 lat (msec) : 4=0.33%, 10=1.88%, 20=49.60%, 50=48.18% 00:10:26.344 cpu : usr=4.50%, sys=7.80%, ctx=343, majf=0, minf=1 00:10:26.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:26.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.344 issued rwts: total=3072,3369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.344 00:10:26.344 Run status group 0 (all jobs): 00:10:26.344 READ: bw=56.8MiB/s (59.6MB/s), 11.4MiB/s-17.7MiB/s (11.9MB/s-18.5MB/s), io=59.3MiB (62.2MB), run=1001-1044msec 00:10:26.344 WRITE: bw=58.6MiB/s (61.4MB/s), 11.5MiB/s-18.0MiB/s (12.1MB/s-18.8MB/s), io=61.2MiB (64.1MB), run=1001-1044msec 00:10:26.344 00:10:26.344 Disk stats (read/write): 00:10:26.344 nvme0n1: ios=3619/3959, merge=0/0, ticks=21826/29638, in_queue=51464, util=100.00% 00:10:26.344 nvme0n2: ios=3633/3936, merge=0/0, ticks=16872/19136, in_queue=36008, util=88.83% 00:10:26.344 
nvme0n3: ios=2617/2839, merge=0/0, ticks=18936/21891, in_queue=40827, util=95.32% 00:10:26.344 nvme0n4: ios=2604/2943, merge=0/0, ticks=19211/23848, in_queue=43059, util=100.00% 00:10:26.344 22:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:26.344 22:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4189305 00:10:26.344 22:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:26.344 22:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:26.344 [global] 00:10:26.344 thread=1 00:10:26.344 invalidate=1 00:10:26.344 rw=read 00:10:26.344 time_based=1 00:10:26.344 runtime=10 00:10:26.344 ioengine=libaio 00:10:26.344 direct=1 00:10:26.344 bs=4096 00:10:26.344 iodepth=1 00:10:26.344 norandommap=1 00:10:26.344 numjobs=1 00:10:26.344 00:10:26.344 [job0] 00:10:26.344 filename=/dev/nvme0n1 00:10:26.344 [job1] 00:10:26.344 filename=/dev/nvme0n2 00:10:26.344 [job2] 00:10:26.344 filename=/dev/nvme0n3 00:10:26.344 [job3] 00:10:26.344 filename=/dev/nvme0n4 00:10:26.344 Could not set queue depth (nvme0n1) 00:10:26.344 Could not set queue depth (nvme0n2) 00:10:26.344 Could not set queue depth (nvme0n3) 00:10:26.344 Could not set queue depth (nvme0n4) 00:10:26.602 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.602 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.602 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.602 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.602 fio-3.35 00:10:26.602 Starting 4 threads 00:10:29.882 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:29.882 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:29.882 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3166208, buflen=4096 00:10:29.882 fio: pid=4189401, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:29.882 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.882 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:29.882 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4853760, buflen=4096 00:10:29.882 fio: pid=4189400, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.139 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=58621952, buflen=4096 00:10:30.139 fio: pid=4189397, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.140 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.140 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:30.398 22:41:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.398 22:41:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:30.398 fio: io_u error on file /dev/nvme0n2: Input/output 
error: read offset=32874496, buflen=4096 00:10:30.398 fio: pid=4189398, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:30.656 00:10:30.656 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4189397: Tue Dec 10 22:41:38 2024 00:10:30.656 read: IOPS=4117, BW=16.1MiB/s (16.9MB/s)(55.9MiB/3476msec) 00:10:30.656 slat (usec): min=4, max=11763, avg=12.00, stdev=152.09 00:10:30.656 clat (usec): min=155, max=40995, avg=227.89, stdev=487.95 00:10:30.656 lat (usec): min=160, max=41010, avg=239.89, stdev=511.41 00:10:30.656 clat percentiles (usec): 00:10:30.656 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:10:30.656 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 212], 00:10:30.656 | 70.00th=[ 239], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 297], 00:10:30.656 | 99.00th=[ 334], 99.50th=[ 388], 99.90th=[ 545], 99.95th=[ 791], 00:10:30.656 | 99.99th=[40633] 00:10:30.656 bw ( KiB/s): min=11400, max=20568, per=63.54%, avg=16260.00, stdev=3427.18, samples=6 00:10:30.656 iops : min= 2850, max= 5142, avg=4065.00, stdev=856.80, samples=6 00:10:30.656 lat (usec) : 250=72.01%, 500=27.79%, 750=0.14%, 1000=0.01% 00:10:30.656 lat (msec) : 2=0.02%, 10=0.01%, 50=0.01% 00:10:30.656 cpu : usr=2.16%, sys=5.18%, ctx=14320, majf=0, minf=1 00:10:30.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.656 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.656 issued rwts: total=14313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.656 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=4189398: Tue Dec 10 22:41:38 2024 00:10:30.656 read: IOPS=2113, BW=8453KiB/s (8656kB/s)(31.4MiB/3798msec) 00:10:30.656 slat (usec): min=5, 
max=30526, avg=23.01, stdev=452.70 00:10:30.656 clat (usec): min=175, max=42209, avg=447.08, stdev=2975.83 00:10:30.656 lat (usec): min=180, max=42216, avg=469.17, stdev=3008.53 00:10:30.656 clat percentiles (usec): 00:10:30.656 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:30.656 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 235], 00:10:30.656 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 273], 00:10:30.656 | 99.00th=[ 371], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:10:30.656 | 99.99th=[42206] 00:10:30.656 bw ( KiB/s): min= 240, max=16536, per=34.86%, avg=8919.00, stdev=7417.80, samples=7 00:10:30.656 iops : min= 60, max= 4134, avg=2229.71, stdev=1854.45, samples=7 00:10:30.656 lat (usec) : 250=83.28%, 500=16.00%, 750=0.16% 00:10:30.656 lat (msec) : 2=0.01%, 50=0.54% 00:10:30.656 cpu : usr=1.82%, sys=3.95%, ctx=8035, majf=0, minf=2 00:10:30.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.656 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.656 issued rwts: total=8027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.656 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4189400: Tue Dec 10 22:41:38 2024 00:10:30.656 read: IOPS=370, BW=1480KiB/s (1516kB/s)(4740KiB/3202msec) 00:10:30.656 slat (nsec): min=4584, max=69177, avg=9866.42, stdev=5196.34 00:10:30.656 clat (usec): min=185, max=41313, avg=2669.92, stdev=9673.13 00:10:30.656 lat (usec): min=190, max=41331, avg=2679.77, stdev=9675.17 00:10:30.656 clat percentiles (usec): 00:10:30.656 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:10:30.656 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:10:30.656 | 70.00th=[ 237], 80.00th=[ 247], 
90.00th=[ 269], 95.00th=[41157], 00:10:30.656 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:30.656 | 99.99th=[41157] 00:10:30.656 bw ( KiB/s): min= 96, max= 5224, per=6.15%, avg=1573.33, stdev=2243.59, samples=6 00:10:30.656 iops : min= 24, max= 1306, avg=393.33, stdev=560.90, samples=6 00:10:30.656 lat (usec) : 250=82.55%, 500=11.21%, 750=0.08% 00:10:30.656 lat (msec) : 2=0.08%, 50=5.99% 00:10:30.656 cpu : usr=0.22%, sys=0.31%, ctx=1187, majf=0, minf=2 00:10:30.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.656 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.656 issued rwts: total=1186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.656 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4189401: Tue Dec 10 22:41:38 2024 00:10:30.657 read: IOPS=265, BW=1060KiB/s (1086kB/s)(3092KiB/2916msec) 00:10:30.657 slat (nsec): min=4681, max=37448, avg=11256.57, stdev=5757.01 00:10:30.657 clat (usec): min=208, max=41279, avg=3728.69, stdev=11390.53 00:10:30.657 lat (usec): min=215, max=41297, avg=3739.94, stdev=11392.94 00:10:30.657 clat percentiles (usec): 00:10:30.657 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 237], 00:10:30.657 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:10:30.657 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 322], 95.00th=[41157], 00:10:30.657 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:30.657 | 99.99th=[41157] 00:10:30.657 bw ( KiB/s): min= 96, max= 5200, per=4.77%, avg=1220.80, stdev=2232.20, samples=5 00:10:30.657 iops : min= 24, max= 1300, avg=305.20, stdev=558.05, samples=5 00:10:30.657 lat (usec) : 250=53.75%, 500=37.60% 00:10:30.657 lat (msec) : 50=8.53% 00:10:30.657 cpu : 
usr=0.00%, sys=0.45%, ctx=774, majf=0, minf=2 00:10:30.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.657 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.657 issued rwts: total=774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.657 00:10:30.657 Run status group 0 (all jobs): 00:10:30.657 READ: bw=25.0MiB/s (26.2MB/s), 1060KiB/s-16.1MiB/s (1086kB/s-16.9MB/s), io=94.9MiB (99.5MB), run=2916-3798msec 00:10:30.657 00:10:30.657 Disk stats (read/write): 00:10:30.657 nvme0n1: ios=13837/0, merge=0/0, ticks=3318/0, in_queue=3318, util=98.34% 00:10:30.657 nvme0n2: ios=8021/0, merge=0/0, ticks=3300/0, in_queue=3300, util=94.51% 00:10:30.657 nvme0n3: ios=1183/0, merge=0/0, ticks=3068/0, in_queue=3068, util=96.82% 00:10:30.657 nvme0n4: ios=772/0, merge=0/0, ticks=2842/0, in_queue=2842, util=96.75% 00:10:30.657 22:41:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.657 22:41:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:30.915 22:41:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.915 22:41:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:31.479 22:41:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.479 22:41:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:31.479 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.480 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 4189305 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:32.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:32.045 nvmf hotplug test: fio failed as expected 00:10:32.045 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.303 rmmod nvme_tcp 00:10:32.303 rmmod nvme_fabrics 00:10:32.303 rmmod nvme_keyring 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:32.303 22:41:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4187270 ']' 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4187270 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 4187270 ']' 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 4187270 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4187270 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4187270' 00:10:32.303 killing process with pid 4187270 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 4187270 00:10:32.303 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 4187270 00:10:32.562 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.563 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.563 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.563 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:32.563 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:10:32.563 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.563 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.563 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.563 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.563 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.563 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.563 22:41:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:35.102 00:10:35.102 real 0m24.228s 00:10:35.102 user 1m25.398s 00:10:35.102 sys 0m7.209s 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.102 ************************************ 00:10:35.102 END TEST nvmf_fio_target 00:10:35.102 ************************************ 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.102 ************************************ 
00:10:35.102 START TEST nvmf_bdevio 00:10:35.102 ************************************ 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:35.102 * Looking for test storage... 00:10:35.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.102 22:41:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:35.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.102 --rc genhtml_branch_coverage=1 00:10:35.102 --rc genhtml_function_coverage=1 00:10:35.102 --rc genhtml_legend=1 00:10:35.102 --rc geninfo_all_blocks=1 00:10:35.102 --rc geninfo_unexecuted_blocks=1 00:10:35.102 00:10:35.102 ' 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:35.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.102 --rc genhtml_branch_coverage=1 00:10:35.102 --rc genhtml_function_coverage=1 00:10:35.102 --rc genhtml_legend=1 00:10:35.102 --rc geninfo_all_blocks=1 00:10:35.102 --rc geninfo_unexecuted_blocks=1 00:10:35.102 00:10:35.102 ' 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:35.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.102 --rc genhtml_branch_coverage=1 00:10:35.102 --rc genhtml_function_coverage=1 00:10:35.102 --rc genhtml_legend=1 00:10:35.102 --rc geninfo_all_blocks=1 00:10:35.102 --rc geninfo_unexecuted_blocks=1 00:10:35.102 00:10:35.102 ' 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:35.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.102 --rc genhtml_branch_coverage=1 00:10:35.102 --rc genhtml_function_coverage=1 00:10:35.102 --rc genhtml_legend=1 00:10:35.102 --rc geninfo_all_blocks=1 00:10:35.102 --rc geninfo_unexecuted_blocks=1 00:10:35.102 00:10:35.102 ' 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.102 22:41:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.102 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:35.103 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.008 22:41:44 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.008 22:41:44 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:37.008 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:37.008 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.008 
22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.008 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:37.009 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:37.009 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.009 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:37.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:37.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:10:37.269 00:10:37.269 --- 10.0.0.2 ping statistics --- 00:10:37.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.269 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:10:37.269 00:10:37.269 --- 10.0.0.1 ping statistics --- 00:10:37.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.269 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:37.269 22:41:44 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=4192166 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 4192166 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 4192166 ']' 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.269 22:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.527 [2024-12-10 22:41:45.019153] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:10:37.527 [2024-12-10 22:41:45.019246] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.527 [2024-12-10 22:41:45.091136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.527 [2024-12-10 22:41:45.145757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.527 [2024-12-10 22:41:45.145817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.527 [2024-12-10 22:41:45.145846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.527 [2024-12-10 22:41:45.145858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.527 [2024-12-10 22:41:45.145869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:37.527 [2024-12-10 22:41:45.147587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:37.527 [2024-12-10 22:41:45.147638] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:37.527 [2024-12-10 22:41:45.147689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:37.527 [2024-12-10 22:41:45.147692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.785 [2024-12-10 22:41:45.300845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.785 22:41:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.785 Malloc0 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.785 [2024-12-10 22:41:45.370005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:37.785 { 00:10:37.785 "params": { 00:10:37.785 "name": "Nvme$subsystem", 00:10:37.785 "trtype": "$TEST_TRANSPORT", 00:10:37.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:37.785 "adrfam": "ipv4", 00:10:37.785 "trsvcid": "$NVMF_PORT", 00:10:37.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:37.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:37.785 "hdgst": ${hdgst:-false}, 00:10:37.785 "ddgst": ${ddgst:-false} 00:10:37.785 }, 00:10:37.785 "method": "bdev_nvme_attach_controller" 00:10:37.785 } 00:10:37.785 EOF 00:10:37.785 )") 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:37.785 22:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:37.785 "params": { 00:10:37.785 "name": "Nvme1", 00:10:37.786 "trtype": "tcp", 00:10:37.786 "traddr": "10.0.0.2", 00:10:37.786 "adrfam": "ipv4", 00:10:37.786 "trsvcid": "4420", 00:10:37.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:37.786 "hdgst": false, 00:10:37.786 "ddgst": false 00:10:37.786 }, 00:10:37.786 "method": "bdev_nvme_attach_controller" 00:10:37.786 }' 00:10:37.786 [2024-12-10 22:41:45.420615] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:10:37.786 [2024-12-10 22:41:45.420685] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4192190 ] 00:10:37.786 [2024-12-10 22:41:45.492097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:38.043 [2024-12-10 22:41:45.557596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.043 [2024-12-10 22:41:45.557625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.043 [2024-12-10 22:41:45.557629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.301 I/O targets: 00:10:38.301 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:38.301 00:10:38.301 00:10:38.301 CUnit - A unit testing framework for C - Version 2.1-3 00:10:38.301 http://cunit.sourceforge.net/ 00:10:38.301 00:10:38.301 00:10:38.301 Suite: bdevio tests on: Nvme1n1 00:10:38.301 Test: blockdev write read block ...passed 00:10:38.301 Test: blockdev write zeroes read block ...passed 00:10:38.301 Test: blockdev write zeroes read no split ...passed 00:10:38.301 Test: blockdev write zeroes read split 
...passed 00:10:38.301 Test: blockdev write zeroes read split partial ...passed 00:10:38.301 Test: blockdev reset ...[2024-12-10 22:41:45.935388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:38.301 [2024-12-10 22:41:45.935496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db8920 (9): Bad file descriptor 00:10:38.301 [2024-12-10 22:41:45.951667] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:38.301 passed 00:10:38.301 Test: blockdev write read 8 blocks ...passed 00:10:38.559 Test: blockdev write read size > 128k ...passed 00:10:38.559 Test: blockdev write read invalid size ...passed 00:10:38.559 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:38.559 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:38.559 Test: blockdev write read max offset ...passed 00:10:38.559 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:38.559 Test: blockdev writev readv 8 blocks ...passed 00:10:38.559 Test: blockdev writev readv 30 x 1block ...passed 00:10:38.559 Test: blockdev writev readv block ...passed 00:10:38.559 Test: blockdev writev readv size > 128k ...passed 00:10:38.559 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:38.559 Test: blockdev comparev and writev ...[2024-12-10 22:41:46.207600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.559 [2024-12-10 22:41:46.207639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:38.559 [2024-12-10 22:41:46.207665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.559 [2024-12-10 
22:41:46.207684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:38.559 [2024-12-10 22:41:46.207987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.559 [2024-12-10 22:41:46.208012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:38.559 [2024-12-10 22:41:46.208035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.559 [2024-12-10 22:41:46.208052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:38.559 [2024-12-10 22:41:46.208365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.559 [2024-12-10 22:41:46.208389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:38.559 [2024-12-10 22:41:46.208410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.559 [2024-12-10 22:41:46.208428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:38.559 [2024-12-10 22:41:46.208752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.559 [2024-12-10 22:41:46.208776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:38.559 [2024-12-10 22:41:46.208798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.559 [2024-12-10 22:41:46.208815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:38.559 passed 00:10:38.817 Test: blockdev nvme passthru rw ...passed 00:10:38.817 Test: blockdev nvme passthru vendor specific ...[2024-12-10 22:41:46.291790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.817 [2024-12-10 22:41:46.291819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:38.817 [2024-12-10 22:41:46.291967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.817 [2024-12-10 22:41:46.291992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:38.817 [2024-12-10 22:41:46.292135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.817 [2024-12-10 22:41:46.292168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:38.817 [2024-12-10 22:41:46.292315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.817 [2024-12-10 22:41:46.292338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:38.817 passed 00:10:38.817 Test: blockdev nvme admin passthru ...passed 00:10:38.817 Test: blockdev copy ...passed 00:10:38.817 00:10:38.817 Run Summary: Type Total Ran Passed Failed Inactive 00:10:38.817 suites 1 1 n/a 0 0 00:10:38.817 tests 23 23 23 0 0 00:10:38.817 asserts 152 152 152 0 n/a 00:10:38.817 00:10:38.817 Elapsed time = 1.130 seconds 
00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.075 rmmod nvme_tcp 00:10:39.075 rmmod nvme_fabrics 00:10:39.075 rmmod nvme_keyring 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 4192166 ']' 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 4192166 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 4192166 ']' 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 4192166 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4192166 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4192166' 00:10:39.075 killing process with pid 4192166 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 4192166 00:10:39.075 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 4192166 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.333 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.236 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:41.236 00:10:41.236 real 0m6.655s 00:10:41.236 user 0m10.040s 00:10:41.236 sys 0m2.254s 00:10:41.236 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.236 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.236 ************************************ 00:10:41.236 END TEST nvmf_bdevio 00:10:41.236 ************************************ 00:10:41.497 22:41:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:41.497 00:10:41.497 real 3m57.132s 00:10:41.497 user 10m20.160s 00:10:41.497 sys 1m7.640s 00:10:41.497 22:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.497 22:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.497 ************************************ 00:10:41.497 END TEST nvmf_target_core 00:10:41.497 ************************************ 00:10:41.497 22:41:48 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:41.497 22:41:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.497 22:41:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.497 22:41:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:41.497 ************************************ 00:10:41.497 START TEST nvmf_target_extra 00:10:41.497 ************************************ 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:41.497 * Looking for test storage... 00:10:41.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:41.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.497 --rc genhtml_branch_coverage=1 00:10:41.497 --rc genhtml_function_coverage=1 00:10:41.497 --rc genhtml_legend=1 00:10:41.497 --rc geninfo_all_blocks=1 
00:10:41.497 --rc geninfo_unexecuted_blocks=1 00:10:41.497 00:10:41.497 ' 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:41.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.497 --rc genhtml_branch_coverage=1 00:10:41.497 --rc genhtml_function_coverage=1 00:10:41.497 --rc genhtml_legend=1 00:10:41.497 --rc geninfo_all_blocks=1 00:10:41.497 --rc geninfo_unexecuted_blocks=1 00:10:41.497 00:10:41.497 ' 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:41.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.497 --rc genhtml_branch_coverage=1 00:10:41.497 --rc genhtml_function_coverage=1 00:10:41.497 --rc genhtml_legend=1 00:10:41.497 --rc geninfo_all_blocks=1 00:10:41.497 --rc geninfo_unexecuted_blocks=1 00:10:41.497 00:10:41.497 ' 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:41.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.497 --rc genhtml_branch_coverage=1 00:10:41.497 --rc genhtml_function_coverage=1 00:10:41.497 --rc genhtml_legend=1 00:10:41.497 --rc geninfo_all_blocks=1 00:10:41.497 --rc geninfo_unexecuted_blocks=1 00:10:41.497 00:10:41.497 ' 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.497 22:41:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:41.498 ************************************ 00:10:41.498 START TEST nvmf_example 00:10:41.498 ************************************ 00:10:41.498 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:41.756 * Looking for test storage... 00:10:41.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.756 
22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:41.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.756 --rc genhtml_branch_coverage=1 00:10:41.756 --rc genhtml_function_coverage=1 00:10:41.756 --rc genhtml_legend=1 00:10:41.756 --rc geninfo_all_blocks=1 00:10:41.756 --rc geninfo_unexecuted_blocks=1 00:10:41.756 00:10:41.756 ' 00:10:41.756 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:41.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.757 --rc genhtml_branch_coverage=1 00:10:41.757 --rc genhtml_function_coverage=1 00:10:41.757 --rc genhtml_legend=1 00:10:41.757 --rc geninfo_all_blocks=1 00:10:41.757 --rc geninfo_unexecuted_blocks=1 00:10:41.757 00:10:41.757 ' 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:41.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.757 --rc genhtml_branch_coverage=1 00:10:41.757 --rc genhtml_function_coverage=1 00:10:41.757 --rc genhtml_legend=1 00:10:41.757 --rc geninfo_all_blocks=1 00:10:41.757 --rc geninfo_unexecuted_blocks=1 00:10:41.757 00:10:41.757 ' 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:41.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.757 --rc 
genhtml_branch_coverage=1 00:10:41.757 --rc genhtml_function_coverage=1 00:10:41.757 --rc genhtml_legend=1 00:10:41.757 --rc geninfo_all_blocks=1 00:10:41.757 --rc geninfo_unexecuted_blocks=1 00:10:41.757 00:10:41.757 ' 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:41.757 22:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.757 
22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.757 22:41:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:44.289 22:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:44.289 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:44.289 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:44.289 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.289 22:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:44.289 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.289 
22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:44.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:10:44.289 00:10:44.289 --- 10.0.0.2 ping statistics --- 00:10:44.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.289 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:10:44.289 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:44.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:10:44.290 00:10:44.290 --- 10.0.0.1 ping statistics --- 00:10:44.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.290 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:44.290 22:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=491 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 491 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 491 ']' 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:44.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.290 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:45.221 22:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:45.221 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:57.418 Initializing NVMe Controllers 00:10:57.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:57.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:57.418 Initialization complete. Launching workers. 00:10:57.418 ======================================================== 00:10:57.418 Latency(us) 00:10:57.418 Device Information : IOPS MiB/s Average min max 00:10:57.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14651.39 57.23 4367.84 832.24 22153.48 00:10:57.418 ======================================================== 00:10:57.418 Total : 14651.39 57.23 4367.84 832.24 22153.48 00:10:57.418 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.418 rmmod nvme_tcp 00:10:57.418 rmmod nvme_fabrics 00:10:57.418 rmmod nvme_keyring 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
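For readers skimming this trace: the target-side bring-up and benchmark recorded above follow a fixed sequence (create transport, create a malloc bdev, create a subsystem, attach the namespace, add a listener, then drive I/O with `spdk_nvme_perf`). A hypothetical standalone sketch of that sequence is below — the NQN, address, and perf flags are taken from the log, but the `rpc.py` form is an assumption; the harness itself issues these as `rpc_cmd` calls inside the `cvl_0_0_ns_spdk` network namespace.

```shell
# Sketch of the NVMe-oF/TCP target setup exercised in the trace above.
# Hypothetical standalone form: the autotest harness runs these via its
# rpc_cmd wrapper, not rpc.py, and inside an ip netns.
NQN="nqn.2016-06.io.spdk:cnode1"
TRADDR="10.0.0.2"
TRSVCID="4420"

setup_cmds=(
  "nvmf_create_transport -t tcp -o -u 8192"
  "bdev_malloc_create 64 512"
  "nvmf_create_subsystem $NQN -a -s SPDK00000000000001"
  "nvmf_subsystem_add_ns $NQN Malloc0"
  "nvmf_subsystem_add_listener $NQN -t tcp -a $TRADDR -s $TRSVCID"
)
for cmd in "${setup_cmds[@]}"; do
  # In the log these appear as: rpc_cmd <method> <args>
  echo "rpc.py $cmd"
done

# Benchmark invocation recorded in the log (workspace path abbreviated):
echo "spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:$TRADDR trsvcid:$TRSVCID subnqn:$NQN'"
```

The `-M 30` flag sets the read percentage of the mixed workload to 30% reads / 70% writes, and `-t 10` bounds the run at 10 seconds, which matches the ~11-second gap between the perf start and the latency summary in the trace.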
00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 491 ']' 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 491 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 491 ']' 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 491 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 491 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 491' 00:10:57.418 killing process with pid 491 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 491 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 491 00:10:57.418 nvmf threads initialize successfully 00:10:57.418 bdev subsystem init successfully 00:10:57.418 created a nvmf target service 00:10:57.418 create targets's poll groups done 00:10:57.418 all subsystems of target started 00:10:57.418 nvmf target is running 00:10:57.418 all subsystems of target stopped 00:10:57.418 destroy targets's poll groups done 00:10:57.418 destroyed the nvmf target service 00:10:57.418 bdev subsystem finish successfully 00:10:57.418 nvmf 
threads destroy successfully 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.418 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.033 00:10:58.033 real 0m16.367s 00:10:58.033 user 0m45.331s 00:10:58.033 sys 0m3.835s 00:10:58.033 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.033 ************************************ 00:10:58.033 END TEST nvmf_example 00:10:58.033 ************************************ 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:58.033 ************************************ 00:10:58.033 START TEST nvmf_filesystem 00:10:58.033 ************************************ 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:58.033 * Looking for test storage... 
00:10:58.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:58.033 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:58.324 
22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:58.324 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:58.324 --rc genhtml_branch_coverage=1 00:10:58.324 --rc genhtml_function_coverage=1 00:10:58.324 --rc genhtml_legend=1 00:10:58.324 --rc geninfo_all_blocks=1 00:10:58.324 --rc geninfo_unexecuted_blocks=1 00:10:58.324 00:10:58.324 ' 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:58.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.324 --rc genhtml_branch_coverage=1 00:10:58.324 --rc genhtml_function_coverage=1 00:10:58.324 --rc genhtml_legend=1 00:10:58.324 --rc geninfo_all_blocks=1 00:10:58.324 --rc geninfo_unexecuted_blocks=1 00:10:58.324 00:10:58.324 ' 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:58.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.324 --rc genhtml_branch_coverage=1 00:10:58.324 --rc genhtml_function_coverage=1 00:10:58.324 --rc genhtml_legend=1 00:10:58.324 --rc geninfo_all_blocks=1 00:10:58.324 --rc geninfo_unexecuted_blocks=1 00:10:58.324 00:10:58.324 ' 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:58.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.324 --rc genhtml_branch_coverage=1 00:10:58.324 --rc genhtml_function_coverage=1 00:10:58.324 --rc genhtml_legend=1 00:10:58.324 --rc geninfo_all_blocks=1 00:10:58.324 --rc geninfo_unexecuted_blocks=1 00:10:58.324 00:10:58.324 ' 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:58.324 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:58.324 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:58.324 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:58.324 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:58.324 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:58.325 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:58.325 
22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:58.325 #define SPDK_CONFIG_H 00:10:58.325 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:58.325 #define SPDK_CONFIG_APPS 1 00:10:58.325 #define SPDK_CONFIG_ARCH native 00:10:58.325 #undef SPDK_CONFIG_ASAN 00:10:58.325 #undef SPDK_CONFIG_AVAHI 00:10:58.325 #undef SPDK_CONFIG_CET 00:10:58.325 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:58.325 #define SPDK_CONFIG_COVERAGE 1 00:10:58.325 #define SPDK_CONFIG_CROSS_PREFIX 00:10:58.325 #undef SPDK_CONFIG_CRYPTO 00:10:58.325 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:58.325 #undef SPDK_CONFIG_CUSTOMOCF 00:10:58.325 #undef SPDK_CONFIG_DAOS 00:10:58.325 #define SPDK_CONFIG_DAOS_DIR 00:10:58.325 #define SPDK_CONFIG_DEBUG 1 00:10:58.325 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:58.325 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:58.325 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:58.325 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:58.325 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:58.325 #undef SPDK_CONFIG_DPDK_UADK 00:10:58.325 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:58.325 #define SPDK_CONFIG_EXAMPLES 1 00:10:58.325 #undef SPDK_CONFIG_FC 00:10:58.325 #define SPDK_CONFIG_FC_PATH 00:10:58.325 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:58.325 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:58.325 #define SPDK_CONFIG_FSDEV 1 00:10:58.325 #undef SPDK_CONFIG_FUSE 00:10:58.325 #undef SPDK_CONFIG_FUZZER 00:10:58.325 #define SPDK_CONFIG_FUZZER_LIB 00:10:58.325 #undef SPDK_CONFIG_GOLANG 00:10:58.325 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:58.325 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:58.325 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:58.325 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:58.325 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:58.325 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:58.325 #undef SPDK_CONFIG_HAVE_LZ4 00:10:58.325 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:58.325 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:58.325 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:58.325 #define SPDK_CONFIG_IDXD 1 00:10:58.325 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:58.325 #undef SPDK_CONFIG_IPSEC_MB 00:10:58.325 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:58.325 #define SPDK_CONFIG_ISAL 1 00:10:58.325 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:58.325 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:58.325 #define SPDK_CONFIG_LIBDIR 00:10:58.325 #undef SPDK_CONFIG_LTO 00:10:58.325 #define SPDK_CONFIG_MAX_LCORES 128 00:10:58.325 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:58.325 #define SPDK_CONFIG_NVME_CUSE 1 00:10:58.325 #undef SPDK_CONFIG_OCF 00:10:58.325 #define SPDK_CONFIG_OCF_PATH 00:10:58.325 #define SPDK_CONFIG_OPENSSL_PATH 00:10:58.325 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:58.325 #define SPDK_CONFIG_PGO_DIR 00:10:58.325 #undef SPDK_CONFIG_PGO_USE 00:10:58.325 #define SPDK_CONFIG_PREFIX /usr/local 00:10:58.325 #undef SPDK_CONFIG_RAID5F 00:10:58.325 #undef SPDK_CONFIG_RBD 00:10:58.325 #define SPDK_CONFIG_RDMA 1 00:10:58.325 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:58.325 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:58.325 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:58.325 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:58.325 #define SPDK_CONFIG_SHARED 1 00:10:58.325 #undef SPDK_CONFIG_SMA 00:10:58.325 #define SPDK_CONFIG_TESTS 1 00:10:58.325 #undef SPDK_CONFIG_TSAN 00:10:58.325 #define SPDK_CONFIG_UBLK 1 00:10:58.325 #define SPDK_CONFIG_UBSAN 1 00:10:58.325 #undef SPDK_CONFIG_UNIT_TESTS 00:10:58.325 #undef SPDK_CONFIG_URING 00:10:58.325 #define SPDK_CONFIG_URING_PATH 00:10:58.325 #undef SPDK_CONFIG_URING_ZNS 00:10:58.325 #undef SPDK_CONFIG_USDT 00:10:58.325 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:58.325 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:58.325 #define SPDK_CONFIG_VFIO_USER 1 00:10:58.325 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:58.325 #define SPDK_CONFIG_VHOST 1 00:10:58.325 #define SPDK_CONFIG_VIRTIO 1 00:10:58.325 #undef SPDK_CONFIG_VTUNE 00:10:58.325 #define SPDK_CONFIG_VTUNE_DIR 00:10:58.325 #define SPDK_CONFIG_WERROR 1 00:10:58.325 #define SPDK_CONFIG_WPDK_DIR 00:10:58.325 #undef SPDK_CONFIG_XNVME 00:10:58.325 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:58.325 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:58.326 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:58.326 
22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:58.326 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:58.326 
22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:58.326 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:58.326 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:58.327 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2472 ]] 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2472 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- 
# local storage_fallback storage_candidates 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.pnhhHT 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.pnhhHT/tests/target /tmp/spdk.pnhhHT 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:58.328 
22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=59522871296 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67273338880 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7750467584 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33626636288 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33636667392 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:58.328 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13432246272 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=13454667776 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22421504 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33636167680 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33636671488 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=503808 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6727319552 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6727331840 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:58.328 * Looking for test storage... 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=59522871296 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # 
new_size=9965060096 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.328 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.329 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:58.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.329 --rc genhtml_branch_coverage=1 00:10:58.329 --rc genhtml_function_coverage=1 00:10:58.329 --rc genhtml_legend=1 00:10:58.329 --rc geninfo_all_blocks=1 00:10:58.329 --rc geninfo_unexecuted_blocks=1 00:10:58.329 00:10:58.329 ' 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:58.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.329 --rc genhtml_branch_coverage=1 00:10:58.329 --rc genhtml_function_coverage=1 00:10:58.329 --rc genhtml_legend=1 00:10:58.329 --rc geninfo_all_blocks=1 00:10:58.329 --rc geninfo_unexecuted_blocks=1 00:10:58.329 00:10:58.329 ' 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:58.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.329 --rc genhtml_branch_coverage=1 00:10:58.329 --rc genhtml_function_coverage=1 00:10:58.329 --rc genhtml_legend=1 00:10:58.329 --rc geninfo_all_blocks=1 00:10:58.329 --rc geninfo_unexecuted_blocks=1 00:10:58.329 00:10:58.329 ' 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:58.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.329 --rc genhtml_branch_coverage=1 00:10:58.329 --rc 
genhtml_function_coverage=1 00:10:58.329 --rc genhtml_legend=1 00:10:58.329 --rc geninfo_all_blocks=1 00:10:58.329 --rc geninfo_unexecuted_blocks=1 00:10:58.329 00:10:58.329 ' 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.329 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.329 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:58.330 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.863 22:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:00.863 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:00.863 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.863 22:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:00.863 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:00.863 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:00.863 22:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:00.863 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:00.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:11:00.864 00:11:00.864 --- 10.0.0.2 ping statistics --- 00:11:00.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.864 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:11:00.864 00:11:00.864 --- 10.0.0.1 ping statistics --- 00:11:00.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.864 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:00.864 22:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.864 ************************************ 00:11:00.864 START TEST nvmf_filesystem_no_in_capsule 00:11:00.864 ************************************ 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=4614 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 4614 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 4614 ']' 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.864 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.864 [2024-12-10 22:42:08.443793] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:11:00.864 [2024-12-10 22:42:08.443911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.864 [2024-12-10 22:42:08.517993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.864 [2024-12-10 22:42:08.579753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.864 [2024-12-10 22:42:08.579809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:00.864 [2024-12-10 22:42:08.579837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.864 [2024-12-10 22:42:08.579849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.864 [2024-12-10 22:42:08.579859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.864 [2024-12-10 22:42:08.581341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.864 [2024-12-10 22:42:08.581406] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.864 [2024-12-10 22:42:08.581476] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.864 [2024-12-10 22:42:08.581479] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.122 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.122 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:01.122 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.122 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.122 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.122 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.122 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:01.123 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:01.123 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.123 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.123 [2024-12-10 22:42:08.728586] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.123 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.123 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:01.123 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.123 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.381 Malloc1 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.381 [2024-12-10 22:42:08.923173] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:01.381 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:01.381 22:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:01.382 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:01.382 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.382 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.382 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.382 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:01.382 { 00:11:01.382 "name": "Malloc1", 00:11:01.382 "aliases": [ 00:11:01.382 "32e3bd7c-46e8-4859-9830-e31e1c63109c" 00:11:01.382 ], 00:11:01.382 "product_name": "Malloc disk", 00:11:01.382 "block_size": 512, 00:11:01.382 "num_blocks": 1048576, 00:11:01.382 "uuid": "32e3bd7c-46e8-4859-9830-e31e1c63109c", 00:11:01.382 "assigned_rate_limits": { 00:11:01.382 "rw_ios_per_sec": 0, 00:11:01.382 "rw_mbytes_per_sec": 0, 00:11:01.382 "r_mbytes_per_sec": 0, 00:11:01.382 "w_mbytes_per_sec": 0 00:11:01.382 }, 00:11:01.382 "claimed": true, 00:11:01.382 "claim_type": "exclusive_write", 00:11:01.382 "zoned": false, 00:11:01.382 "supported_io_types": { 00:11:01.382 "read": true, 00:11:01.382 "write": true, 00:11:01.382 "unmap": true, 00:11:01.382 "flush": true, 00:11:01.382 "reset": true, 00:11:01.382 "nvme_admin": false, 00:11:01.382 "nvme_io": false, 00:11:01.382 "nvme_io_md": false, 00:11:01.382 "write_zeroes": true, 00:11:01.382 "zcopy": true, 00:11:01.382 "get_zone_info": false, 00:11:01.382 "zone_management": false, 00:11:01.382 "zone_append": false, 00:11:01.382 "compare": false, 00:11:01.382 "compare_and_write": 
false, 00:11:01.382 "abort": true, 00:11:01.382 "seek_hole": false, 00:11:01.382 "seek_data": false, 00:11:01.382 "copy": true, 00:11:01.382 "nvme_iov_md": false 00:11:01.382 }, 00:11:01.382 "memory_domains": [ 00:11:01.382 { 00:11:01.382 "dma_device_id": "system", 00:11:01.382 "dma_device_type": 1 00:11:01.382 }, 00:11:01.382 { 00:11:01.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.382 "dma_device_type": 2 00:11:01.382 } 00:11:01.382 ], 00:11:01.382 "driver_specific": {} 00:11:01.382 } 00:11:01.382 ]' 00:11:01.382 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:01.382 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:01.382 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:01.382 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:01.382 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:01.382 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:01.382 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:01.382 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.315 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:02.315 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:02.315 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.315 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:02.315 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:04.212 22:42:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:04.212 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:05.156 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:06.089 22:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.089 ************************************ 00:11:06.089 START TEST filesystem_ext4 00:11:06.089 ************************************ 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:06.089 22:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:06.089 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:06.089 mke2fs 1.47.0 (5-Feb-2023) 00:11:06.347 Discarding device blocks: 0/522240 done 00:11:06.347 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:06.347 Filesystem UUID: eb9a611f-e9f9-4d5d-a409-227005be6fdf 00:11:06.347 Superblock backups stored on blocks: 00:11:06.347 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:06.347 00:11:06.347 Allocating group tables: 0/64 done 00:11:06.347 Writing inode tables: 0/64 done 00:11:06.347 Creating journal (8192 blocks): done 00:11:06.347 Writing superblocks and filesystem accounting information: 0/64 done 00:11:06.347 00:11:06.347 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:06.347 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:12.899 22:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4614 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:12.899 00:11:12.899 real 0m5.984s 00:11:12.899 user 0m0.011s 00:11:12.899 sys 0m0.065s 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:12.899 ************************************ 00:11:12.899 END TEST filesystem_ext4 00:11:12.899 ************************************ 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:12.899 22:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.899 ************************************ 00:11:12.899 START TEST filesystem_btrfs 00:11:12.899 ************************************ 00:11:12.899 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:12.900 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:12.900 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.900 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:12.900 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:12.900 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:12.900 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:12.900 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:12.900 22:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:12.900 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:12.900 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:12.900 btrfs-progs v6.8.1 00:11:12.900 See https://btrfs.readthedocs.io for more information. 00:11:12.900 00:11:12.900 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:12.900 NOTE: several default settings have changed in version 5.15, please make sure 00:11:12.900 this does not affect your deployments: 00:11:12.900 - DUP for metadata (-m dup) 00:11:12.900 - enabled no-holes (-O no-holes) 00:11:12.900 - enabled free-space-tree (-R free-space-tree) 00:11:12.900 00:11:12.900 Label: (null) 00:11:12.900 UUID: 6f072576-04a5-42bb-890b-17013e202523 00:11:12.900 Node size: 16384 00:11:12.900 Sector size: 4096 (CPU page size: 4096) 00:11:12.900 Filesystem size: 510.00MiB 00:11:12.900 Block group profiles: 00:11:12.900 Data: single 8.00MiB 00:11:12.900 Metadata: DUP 32.00MiB 00:11:12.900 System: DUP 8.00MiB 00:11:12.900 SSD detected: yes 00:11:12.900 Zoned device: no 00:11:12.900 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:12.900 Checksum: crc32c 00:11:12.900 Number of devices: 1 00:11:12.900 Devices: 00:11:12.900 ID SIZE PATH 00:11:12.900 1 510.00MiB /dev/nvme0n1p1 00:11:12.900 00:11:12.900 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:12.900 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:12.900 22:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4614 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:12.900 00:11:12.900 real 0m0.828s 00:11:12.900 user 0m0.015s 00:11:12.900 sys 0m0.108s 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.900 
22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:12.900 ************************************ 00:11:12.900 END TEST filesystem_btrfs 00:11:12.900 ************************************ 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.900 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.158 ************************************ 00:11:13.158 START TEST filesystem_xfs 00:11:13.158 ************************************ 00:11:13.158 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:13.158 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:13.158 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:13.158 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:13.158 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:13.158 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:13.158 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:13.158 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:13.158 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:13.158 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:13.158 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:13.158 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:13.158 = sectsz=512 attr=2, projid32bit=1 00:11:13.158 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:13.159 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:13.159 data = bsize=4096 blocks=130560, imaxpct=25 00:11:13.159 = sunit=0 swidth=0 blks 00:11:13.159 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:13.159 log =internal log bsize=4096 blocks=16384, version=2 00:11:13.159 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:13.159 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:14.091 Discarding blocks...Done. 
00:11:14.091 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:14.091 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.619 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.619 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:16.619 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.619 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:16.619 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:16.619 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.619 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4614 00:11:16.619 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.619 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.619 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.619 22:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.619 00:11:16.619 real 0m3.257s 00:11:16.619 user 0m0.013s 00:11:16.620 sys 0m0.065s 00:11:16.620 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.620 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:16.620 ************************************ 00:11:16.620 END TEST filesystem_xfs 00:11:16.620 ************************************ 00:11:16.620 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:16.620 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:16.620 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.620 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.620 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4614 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4614 ']' 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4614 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4614 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4614' 00:11:16.620 killing process with pid 4614 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 4614 00:11:16.620 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 4614 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:16.880 00:11:16.880 real 0m16.121s 00:11:16.880 user 1m2.464s 00:11:16.880 sys 0m1.973s 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.880 ************************************ 00:11:16.880 END TEST nvmf_filesystem_no_in_capsule 00:11:16.880 ************************************ 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.880 22:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:16.880 ************************************ 00:11:16.880 START TEST nvmf_filesystem_in_capsule 00:11:16.880 ************************************ 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=6897 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 6897 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 6897 ']' 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.880 22:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.880 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.139 [2024-12-10 22:42:24.618069] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:11:17.139 [2024-12-10 22:42:24.618157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.139 [2024-12-10 22:42:24.689393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.139 [2024-12-10 22:42:24.746686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.139 [2024-12-10 22:42:24.746738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.139 [2024-12-10 22:42:24.746769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.139 [2024-12-10 22:42:24.746780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.139 [2024-12-10 22:42:24.746790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:17.139 [2024-12-10 22:42:24.748266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.139 [2024-12-10 22:42:24.748328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.139 [2024-12-10 22:42:24.748392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.139 [2024-12-10 22:42:24.748396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.397 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.397 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:17.397 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.397 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.397 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.398 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.398 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:17.398 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:17.398 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.398 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.398 [2024-12-10 22:42:24.899612] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.398 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.398 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:17.398 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.398 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.398 Malloc1 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.398 22:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.398 [2024-12-10 22:42:25.095243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.398 22:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:17.398 { 00:11:17.398 "name": "Malloc1", 00:11:17.398 "aliases": [ 00:11:17.398 "7ffa4178-9dd9-4449-bc72-4380336e224a" 00:11:17.398 ], 00:11:17.398 "product_name": "Malloc disk", 00:11:17.398 "block_size": 512, 00:11:17.398 "num_blocks": 1048576, 00:11:17.398 "uuid": "7ffa4178-9dd9-4449-bc72-4380336e224a", 00:11:17.398 "assigned_rate_limits": { 00:11:17.398 "rw_ios_per_sec": 0, 00:11:17.398 "rw_mbytes_per_sec": 0, 00:11:17.398 "r_mbytes_per_sec": 0, 00:11:17.398 "w_mbytes_per_sec": 0 00:11:17.398 }, 00:11:17.398 "claimed": true, 00:11:17.398 "claim_type": "exclusive_write", 00:11:17.398 "zoned": false, 00:11:17.398 "supported_io_types": { 00:11:17.398 "read": true, 00:11:17.398 "write": true, 00:11:17.398 "unmap": true, 00:11:17.398 "flush": true, 00:11:17.398 "reset": true, 00:11:17.398 "nvme_admin": false, 00:11:17.398 "nvme_io": false, 00:11:17.398 "nvme_io_md": false, 00:11:17.398 "write_zeroes": true, 00:11:17.398 "zcopy": true, 00:11:17.398 "get_zone_info": false, 00:11:17.398 "zone_management": false, 00:11:17.398 "zone_append": false, 00:11:17.398 "compare": false, 00:11:17.398 "compare_and_write": false, 00:11:17.398 "abort": true, 00:11:17.398 "seek_hole": false, 00:11:17.398 "seek_data": false, 00:11:17.398 "copy": true, 00:11:17.398 "nvme_iov_md": false 00:11:17.398 }, 00:11:17.398 "memory_domains": [ 00:11:17.398 { 00:11:17.398 "dma_device_id": "system", 00:11:17.398 "dma_device_type": 1 00:11:17.398 }, 00:11:17.398 { 00:11:17.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.398 "dma_device_type": 2 00:11:17.398 } 00:11:17.398 ], 00:11:17.398 
"driver_specific": {} 00:11:17.398 } 00:11:17.398 ]' 00:11:17.398 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:17.655 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:17.655 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:17.655 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:17.655 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:17.655 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:17.655 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:17.655 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.220 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.220 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:18.220 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.220 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:18.220 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:20.119 22:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:20.119 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:20.377 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:21.310 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.243 ************************************ 00:11:22.243 START TEST filesystem_in_capsule_ext4 00:11:22.243 ************************************ 00:11:22.243 22:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:22.243 mke2fs 1.47.0 (5-Feb-2023) 00:11:22.243 Discarding device blocks: 
0/522240 done 00:11:22.243 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:22.243 Filesystem UUID: 68db1f3a-a38c-4012-b616-b2d88ccf2f9a 00:11:22.243 Superblock backups stored on blocks: 00:11:22.243 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:22.243 00:11:22.243 Allocating group tables: 0/64 done 00:11:22.243 Writing inode tables: 0/64 done 00:11:22.243 Creating journal (8192 blocks): done 00:11:22.243 Writing superblocks and filesystem accounting information: 0/64 done 00:11:22.243 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:22.243 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 6897 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:28.803 00:11:28.803 real 0m5.856s 00:11:28.803 user 0m0.018s 00:11:28.803 sys 0m0.062s 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:28.803 ************************************ 00:11:28.803 END TEST filesystem_in_capsule_ext4 00:11:28.803 ************************************ 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.803 ************************************ 00:11:28.803 START TEST 
filesystem_in_capsule_btrfs 00:11:28.803 ************************************ 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:28.803 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:28.804 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 
-- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:28.804 btrfs-progs v6.8.1 00:11:28.804 See https://btrfs.readthedocs.io for more information. 00:11:28.804 00:11:28.804 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:28.804 NOTE: several default settings have changed in version 5.15, please make sure 00:11:28.804 this does not affect your deployments: 00:11:28.804 - DUP for metadata (-m dup) 00:11:28.804 - enabled no-holes (-O no-holes) 00:11:28.804 - enabled free-space-tree (-R free-space-tree) 00:11:28.804 00:11:28.804 Label: (null) 00:11:28.804 UUID: adae4964-a6cc-41aa-a907-5af0c011b5ed 00:11:28.804 Node size: 16384 00:11:28.804 Sector size: 4096 (CPU page size: 4096) 00:11:28.804 Filesystem size: 510.00MiB 00:11:28.804 Block group profiles: 00:11:28.804 Data: single 8.00MiB 00:11:28.804 Metadata: DUP 32.00MiB 00:11:28.804 System: DUP 8.00MiB 00:11:28.804 SSD detected: yes 00:11:28.804 Zoned device: no 00:11:28.804 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:28.804 Checksum: crc32c 00:11:28.804 Number of devices: 1 00:11:28.804 Devices: 00:11:28.804 ID SIZE PATH 00:11:28.804 1 510.00MiB /dev/nvme0n1p1 00:11:28.804 00:11:28.804 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:28.804 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 6897 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:29.062 00:11:29.062 real 0m0.989s 00:11:29.062 user 0m0.017s 00:11:29.062 sys 0m0.101s 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:29.062 ************************************ 00:11:29.062 END TEST filesystem_in_capsule_btrfs 00:11:29.062 ************************************ 00:11:29.062 22:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.062 ************************************ 00:11:29.062 START TEST filesystem_in_capsule_xfs 00:11:29.062 ************************************ 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:29.062 
22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:29.062 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:29.062 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:29.062 = sectsz=512 attr=2, projid32bit=1 00:11:29.062 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:29.062 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:29.062 data = bsize=4096 blocks=130560, imaxpct=25 00:11:29.062 = sunit=0 swidth=0 blks 00:11:29.062 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:29.062 log =internal log bsize=4096 blocks=16384, version=2 00:11:29.062 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:29.062 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:29.995 Discarding blocks...Done. 
00:11:29.995 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:29.995 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 6897 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.893 00:11:31.893 real 0m2.743s 00:11:31.893 user 0m0.011s 00:11:31.893 sys 0m0.064s 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:31.893 ************************************ 00:11:31.893 END TEST filesystem_in_capsule_xfs 00:11:31.893 ************************************ 00:11:31.893 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.151 22:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 6897 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 6897 ']' 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 6897 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.151 22:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 6897 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 6897' 00:11:32.151 killing process with pid 6897 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 6897 00:11:32.151 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 6897 00:11:32.716 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:32.717 00:11:32.717 real 0m15.737s 00:11:32.717 user 1m0.892s 00:11:32.717 sys 0m2.013s 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.717 ************************************ 00:11:32.717 END TEST nvmf_filesystem_in_capsule 00:11:32.717 ************************************ 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.717 rmmod nvme_tcp 00:11:32.717 rmmod nvme_fabrics 00:11:32.717 rmmod nvme_keyring 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.717 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.310 00:11:35.310 real 0m36.830s 00:11:35.310 user 2m4.473s 00:11:35.310 sys 0m5.833s 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.310 ************************************ 00:11:35.310 END TEST nvmf_filesystem 00:11:35.310 ************************************ 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.310 ************************************ 00:11:35.310 START TEST nvmf_target_discovery 00:11:35.310 ************************************ 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:35.310 * Looking for test storage... 
00:11:35.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:35.310 
22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:35.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.310 --rc genhtml_branch_coverage=1 00:11:35.310 --rc genhtml_function_coverage=1 00:11:35.310 --rc genhtml_legend=1 00:11:35.310 --rc geninfo_all_blocks=1 00:11:35.310 --rc geninfo_unexecuted_blocks=1 00:11:35.310 00:11:35.310 ' 00:11:35.310 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:35.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.310 --rc genhtml_branch_coverage=1 00:11:35.310 --rc genhtml_function_coverage=1 00:11:35.310 --rc genhtml_legend=1 00:11:35.310 --rc geninfo_all_blocks=1 00:11:35.311 --rc geninfo_unexecuted_blocks=1 00:11:35.311 00:11:35.311 ' 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:35.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.311 --rc genhtml_branch_coverage=1 00:11:35.311 --rc genhtml_function_coverage=1 00:11:35.311 --rc genhtml_legend=1 00:11:35.311 --rc geninfo_all_blocks=1 00:11:35.311 --rc geninfo_unexecuted_blocks=1 00:11:35.311 00:11:35.311 ' 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:35.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.311 --rc genhtml_branch_coverage=1 00:11:35.311 --rc genhtml_function_coverage=1 00:11:35.311 --rc genhtml_legend=1 00:11:35.311 --rc geninfo_all_blocks=1 00:11:35.311 --rc geninfo_unexecuted_blocks=1 00:11:35.311 00:11:35.311 ' 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.311 22:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.311 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.312 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.217 22:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.217 22:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:37.217 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:37.217 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.217 22:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:37.217 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.217 22:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:37.217 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.217 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.218 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.218 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.218 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:11:37.477 00:11:37.477 --- 10.0.0.2 ping statistics --- 00:11:37.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.477 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:37.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:11:37.477 00:11:37.477 --- 10.0.0.1 ping statistics --- 00:11:37.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.477 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.477 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=10922 00:11:37.477 22:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 10922 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 10922 ']' 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.477 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.477 [2024-12-10 22:42:45.059300] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:11:37.477 [2024-12-10 22:42:45.059401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.477 [2024-12-10 22:42:45.131252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.477 [2024-12-10 22:42:45.186333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:37.477 [2024-12-10 22:42:45.186383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.477 [2024-12-10 22:42:45.186405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.477 [2024-12-10 22:42:45.186416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.477 [2024-12-10 22:42:45.186427] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.477 [2024-12-10 22:42:45.187858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.477 [2024-12-10 22:42:45.187944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.477 [2024-12-10 22:42:45.188055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.477 [2024-12-10 22:42:45.188063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.735 [2024-12-10 22:42:45.341531] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.735 Null1 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.735 
22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.735 [2024-12-10 22:42:45.391740] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.735 Null2 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.735 
22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.735 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.736 Null3 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.736 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.994 Null4 00:11:37.994 
22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:37.994 00:11:37.994 Discovery Log Number of Records 6, Generation counter 6 00:11:37.994 =====Discovery Log Entry 0====== 00:11:37.994 trtype: tcp 00:11:37.994 adrfam: ipv4 00:11:37.994 subtype: current discovery subsystem 00:11:37.994 treq: not required 00:11:37.994 portid: 0 00:11:37.994 trsvcid: 4420 00:11:37.994 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:37.994 traddr: 10.0.0.2 00:11:37.994 eflags: explicit discovery connections, duplicate discovery information 00:11:37.994 sectype: none 00:11:37.994 =====Discovery Log Entry 1====== 00:11:37.994 trtype: tcp 00:11:37.994 adrfam: ipv4 00:11:37.994 subtype: nvme subsystem 00:11:37.994 treq: not required 00:11:37.994 portid: 0 00:11:37.994 trsvcid: 4420 00:11:37.994 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:37.994 traddr: 10.0.0.2 00:11:37.994 eflags: none 00:11:37.994 sectype: none 00:11:37.994 =====Discovery Log Entry 2====== 00:11:37.994 
trtype: tcp 00:11:37.994 adrfam: ipv4 00:11:37.994 subtype: nvme subsystem 00:11:37.994 treq: not required 00:11:37.994 portid: 0 00:11:37.994 trsvcid: 4420 00:11:37.994 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:37.994 traddr: 10.0.0.2 00:11:37.994 eflags: none 00:11:37.994 sectype: none 00:11:37.994 =====Discovery Log Entry 3====== 00:11:37.994 trtype: tcp 00:11:37.994 adrfam: ipv4 00:11:37.994 subtype: nvme subsystem 00:11:37.994 treq: not required 00:11:37.994 portid: 0 00:11:37.994 trsvcid: 4420 00:11:37.994 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:37.994 traddr: 10.0.0.2 00:11:37.994 eflags: none 00:11:37.994 sectype: none 00:11:37.994 =====Discovery Log Entry 4====== 00:11:37.994 trtype: tcp 00:11:37.994 adrfam: ipv4 00:11:37.994 subtype: nvme subsystem 00:11:37.994 treq: not required 00:11:37.994 portid: 0 00:11:37.994 trsvcid: 4420 00:11:37.994 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:37.994 traddr: 10.0.0.2 00:11:37.994 eflags: none 00:11:37.994 sectype: none 00:11:37.994 =====Discovery Log Entry 5====== 00:11:37.994 trtype: tcp 00:11:37.994 adrfam: ipv4 00:11:37.994 subtype: discovery subsystem referral 00:11:37.994 treq: not required 00:11:37.994 portid: 0 00:11:37.994 trsvcid: 4430 00:11:37.994 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:37.994 traddr: 10.0.0.2 00:11:37.994 eflags: none 00:11:37.994 sectype: none 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:37.994 Perform nvmf subsystem discovery via RPC 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.994 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.994 [ 00:11:37.994 { 00:11:37.994 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:37.994 "subtype": "Discovery", 00:11:37.994 "listen_addresses": [ 00:11:37.994 { 00:11:37.994 "trtype": "TCP", 00:11:37.994 "adrfam": "IPv4", 00:11:37.994 "traddr": "10.0.0.2", 00:11:37.994 "trsvcid": "4420" 00:11:37.994 } 00:11:37.994 ], 00:11:37.994 "allow_any_host": true, 00:11:37.994 "hosts": [] 00:11:37.994 }, 00:11:37.994 { 00:11:37.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:37.994 "subtype": "NVMe", 00:11:37.994 "listen_addresses": [ 00:11:37.994 { 00:11:37.994 "trtype": "TCP", 00:11:37.994 "adrfam": "IPv4", 00:11:37.994 "traddr": "10.0.0.2", 00:11:37.994 "trsvcid": "4420" 00:11:37.994 } 00:11:37.994 ], 00:11:37.994 "allow_any_host": true, 00:11:37.994 "hosts": [], 00:11:37.994 "serial_number": "SPDK00000000000001", 00:11:37.994 "model_number": "SPDK bdev Controller", 00:11:37.994 "max_namespaces": 32, 00:11:37.994 "min_cntlid": 1, 00:11:37.994 "max_cntlid": 65519, 00:11:37.994 "namespaces": [ 00:11:37.994 { 00:11:37.994 "nsid": 1, 00:11:37.994 "bdev_name": "Null1", 00:11:37.994 "name": "Null1", 00:11:37.994 "nguid": "E967C9475D484F7199F25E079854FDBD", 00:11:37.994 "uuid": "e967c947-5d48-4f71-99f2-5e079854fdbd" 00:11:37.994 } 00:11:37.994 ] 00:11:37.994 }, 00:11:37.994 { 00:11:37.994 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:37.994 "subtype": "NVMe", 00:11:37.994 "listen_addresses": [ 00:11:37.994 { 00:11:37.994 "trtype": "TCP", 00:11:37.994 "adrfam": "IPv4", 00:11:37.994 "traddr": "10.0.0.2", 00:11:37.994 "trsvcid": "4420" 00:11:37.994 } 00:11:37.994 ], 00:11:37.994 "allow_any_host": true, 00:11:37.994 "hosts": [], 00:11:37.994 "serial_number": "SPDK00000000000002", 00:11:37.994 "model_number": "SPDK bdev Controller", 00:11:37.994 "max_namespaces": 32, 00:11:37.994 "min_cntlid": 1, 00:11:37.994 "max_cntlid": 65519, 00:11:37.994 "namespaces": [ 00:11:37.994 { 00:11:37.994 "nsid": 1, 00:11:37.994 "bdev_name": "Null2", 00:11:37.994 "name": "Null2", 00:11:37.994 "nguid": "F5199D8392C84F8386F04F4978949CF5", 
00:11:37.994 "uuid": "f5199d83-92c8-4f83-86f0-4f4978949cf5" 00:11:37.994 } 00:11:37.994 ] 00:11:37.994 }, 00:11:37.994 { 00:11:37.994 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:37.994 "subtype": "NVMe", 00:11:37.994 "listen_addresses": [ 00:11:37.994 { 00:11:37.994 "trtype": "TCP", 00:11:37.994 "adrfam": "IPv4", 00:11:37.994 "traddr": "10.0.0.2", 00:11:37.994 "trsvcid": "4420" 00:11:37.994 } 00:11:37.994 ], 00:11:37.994 "allow_any_host": true, 00:11:37.995 "hosts": [], 00:11:37.995 "serial_number": "SPDK00000000000003", 00:11:37.995 "model_number": "SPDK bdev Controller", 00:11:37.995 "max_namespaces": 32, 00:11:37.995 "min_cntlid": 1, 00:11:37.995 "max_cntlid": 65519, 00:11:37.995 "namespaces": [ 00:11:37.995 { 00:11:37.995 "nsid": 1, 00:11:37.995 "bdev_name": "Null3", 00:11:37.995 "name": "Null3", 00:11:37.995 "nguid": "5ECEB636C44F4DE49AF56C59463BC669", 00:11:37.995 "uuid": "5eceb636-c44f-4de4-9af5-6c59463bc669" 00:11:37.995 } 00:11:37.995 ] 00:11:37.995 }, 00:11:37.995 { 00:11:37.995 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:37.995 "subtype": "NVMe", 00:11:37.995 "listen_addresses": [ 00:11:37.995 { 00:11:37.995 "trtype": "TCP", 00:11:37.995 "adrfam": "IPv4", 00:11:37.995 "traddr": "10.0.0.2", 00:11:37.995 "trsvcid": "4420" 00:11:37.995 } 00:11:37.995 ], 00:11:37.995 "allow_any_host": true, 00:11:37.995 "hosts": [], 00:11:37.995 "serial_number": "SPDK00000000000004", 00:11:37.995 "model_number": "SPDK bdev Controller", 00:11:37.995 "max_namespaces": 32, 00:11:37.995 "min_cntlid": 1, 00:11:37.995 "max_cntlid": 65519, 00:11:37.995 "namespaces": [ 00:11:37.995 { 00:11:37.995 "nsid": 1, 00:11:37.995 "bdev_name": "Null4", 00:11:37.995 "name": "Null4", 00:11:37.995 "nguid": "7612D262578A4C7798FDF0FDDF6EDF35", 00:11:37.995 "uuid": "7612d262-578a-4c77-98fd-f0fddf6edf35" 00:11:37.995 } 00:11:37.995 ] 00:11:37.995 } 00:11:37.995 ] 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.995 
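The xtrace above records the full setup sequence for this discovery test: four null bdevs, four NVMe-oF subsystems with one namespace and one TCP listener each, a discovery listener, and a referral on a second port. As a rough standalone sketch (command names, sizes, NQNs, serials, and the 10.0.0.2 address are taken verbatim from the trace; the script only collects and prints the RPC commands, which one could pipe to SPDK's `scripts/rpc.py` to actually run):

```shell
#!/usr/bin/env bash
# Sketch of the setup sequence recorded in the xtrace above.
# Commands are only collected and printed here; feed them to SPDK's
# scripts/rpc.py (path is an assumption, not shown in this log) to execute.
CMDS=()
for i in 1 2 3 4; do
    # 102400 blocks of 512 bytes per null bdev, as in the trace
    CMDS+=("bdev_null_create Null$i 102400 512")
    # -a: allow any host, -s: serial number, matching the logged RPCs
    CMDS+=("nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i")
    CMDS+=("nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i")
    CMDS+=("nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420")
done
# Expose the discovery service itself, then add a referral on port 4430
CMDS+=("nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420")
CMDS+=("nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430")
printf '%s\n' "${CMDS[@]}"
```

This yields the six discovery log entries seen above when queried with `nvme discover`: one current discovery subsystem, four NVMe subsystems on port 4420, and one referral on port 4430.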
22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.995 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:38.254 rmmod nvme_tcp 00:11:38.254 rmmod nvme_fabrics 00:11:38.254 rmmod nvme_keyring 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 10922 ']' 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 10922 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 10922 ']' 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 10922 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:38.254 
22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 10922 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 10922' 00:11:38.254 killing process with pid 10922 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 10922 00:11:38.254 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 10922 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- 
# remove_spdk_ns 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.514 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:41.057 00:11:41.057 real 0m5.664s 00:11:41.057 user 0m4.687s 00:11:41.057 sys 0m1.946s 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.057 ************************************ 00:11:41.057 END TEST nvmf_target_discovery 00:11:41.057 ************************************ 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:41.057 ************************************ 00:11:41.057 START TEST nvmf_referrals 00:11:41.057 ************************************ 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:41.057 * Looking for test storage... 
00:11:41.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.057 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:41.058 22:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:41.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.058 
--rc genhtml_branch_coverage=1 00:11:41.058 --rc genhtml_function_coverage=1 00:11:41.058 --rc genhtml_legend=1 00:11:41.058 --rc geninfo_all_blocks=1 00:11:41.058 --rc geninfo_unexecuted_blocks=1 00:11:41.058 00:11:41.058 ' 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:41.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.058 --rc genhtml_branch_coverage=1 00:11:41.058 --rc genhtml_function_coverage=1 00:11:41.058 --rc genhtml_legend=1 00:11:41.058 --rc geninfo_all_blocks=1 00:11:41.058 --rc geninfo_unexecuted_blocks=1 00:11:41.058 00:11:41.058 ' 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:41.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.058 --rc genhtml_branch_coverage=1 00:11:41.058 --rc genhtml_function_coverage=1 00:11:41.058 --rc genhtml_legend=1 00:11:41.058 --rc geninfo_all_blocks=1 00:11:41.058 --rc geninfo_unexecuted_blocks=1 00:11:41.058 00:11:41.058 ' 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:41.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.058 --rc genhtml_branch_coverage=1 00:11:41.058 --rc genhtml_function_coverage=1 00:11:41.058 --rc genhtml_legend=1 00:11:41.058 --rc geninfo_all_blocks=1 00:11:41.058 --rc geninfo_unexecuted_blocks=1 00:11:41.058 00:11:41.058 ' 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.058 
22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.058 22:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:41.058 22:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:41.058 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.059 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.059 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.059 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:41.059 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:41.059 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.059 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.964 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.964 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.964 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.964 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.964 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.964 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.964 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.964 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.964 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.964 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:42.965 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:42.965 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:42.965 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:42.965 22:42:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:42.965 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.965 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:11:43.224 00:11:43.224 --- 10.0.0.2 ping statistics --- 00:11:43.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.224 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:43.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:11:43.224 00:11:43.224 --- 10.0.0.1 ping statistics --- 00:11:43.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.224 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=13022 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 13022 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 13022 ']' 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.224 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.224 [2024-12-10 22:42:50.856636] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:11:43.224 [2024-12-10 22:42:50.856732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.224 [2024-12-10 22:42:50.929628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.482 [2024-12-10 22:42:50.986653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.482 [2024-12-10 22:42:50.986708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:43.482 [2024-12-10 22:42:50.986721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.482 [2024-12-10 22:42:50.986732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.482 [2024-12-10 22:42:50.986741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.482 [2024-12-10 22:42:50.988163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.482 [2024-12-10 22:42:50.988273] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.482 [2024-12-10 22:42:50.988369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.482 [2024-12-10 22:42:50.988373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.482 [2024-12-10 22:42:51.131336] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.482 [2024-12-10 22:42:51.162752] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.482 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:43.483 22:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.483 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.483 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.483 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.483 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:43.483 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.483 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.483 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.740 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:43.740 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:43.740 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:43.740 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.740 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:43.740 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.741 22:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.741 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.999 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.257 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:44.516 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:44.516 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:44.516 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:44.516 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:44.516 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.516 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.774 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.031 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:45.031 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:45.031 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:45.031 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:45.031 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:45.031 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.031 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:45.290 22:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:45.290 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.547 rmmod nvme_tcp 00:11:45.547 rmmod nvme_fabrics 00:11:45.547 rmmod nvme_keyring 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 13022 ']' 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 13022 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 13022 ']' 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 13022 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 13022 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 13022' 00:11:45.547 killing process with pid 13022 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 
13022 00:11:45.547 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 13022 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.806 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.346 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:48.346 00:11:48.346 real 0m7.338s 00:11:48.346 user 0m11.493s 00:11:48.346 sys 0m2.431s 00:11:48.346 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.346 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:48.346 ************************************ 
00:11:48.346 END TEST nvmf_referrals 00:11:48.346 ************************************ 00:11:48.346 22:42:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:48.346 22:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.346 22:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.346 22:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.346 ************************************ 00:11:48.346 START TEST nvmf_connect_disconnect 00:11:48.346 ************************************ 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:48.347 * Looking for test storage... 
00:11:48.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:48.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.347 --rc genhtml_branch_coverage=1 00:11:48.347 --rc genhtml_function_coverage=1 00:11:48.347 --rc genhtml_legend=1 00:11:48.347 --rc geninfo_all_blocks=1 00:11:48.347 --rc geninfo_unexecuted_blocks=1 00:11:48.347 00:11:48.347 ' 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:48.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.347 --rc genhtml_branch_coverage=1 00:11:48.347 --rc genhtml_function_coverage=1 00:11:48.347 --rc genhtml_legend=1 00:11:48.347 --rc geninfo_all_blocks=1 00:11:48.347 --rc geninfo_unexecuted_blocks=1 00:11:48.347 00:11:48.347 ' 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:48.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.347 --rc genhtml_branch_coverage=1 00:11:48.347 --rc genhtml_function_coverage=1 00:11:48.347 --rc genhtml_legend=1 00:11:48.347 --rc geninfo_all_blocks=1 00:11:48.347 --rc geninfo_unexecuted_blocks=1 00:11:48.347 00:11:48.347 ' 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:48.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.347 --rc genhtml_branch_coverage=1 00:11:48.347 --rc genhtml_function_coverage=1 00:11:48.347 --rc genhtml_legend=1 00:11:48.347 --rc geninfo_all_blocks=1 00:11:48.347 --rc geninfo_unexecuted_blocks=1 00:11:48.347 00:11:48.347 ' 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.347 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.348 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.880 22:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.880 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:50.881 22:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:50.881 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:50.881 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.881 22:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:50.881 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.881 22:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.881 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:50.881 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
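The device-discovery trace above (nvmf/common.sh lines ~410-428 in this run) maps each matched PCI address to its network interface by globbing the device's sysfs `net/` directory and stripping the path prefix, which is how `0000:0a:00.0` resolves to `cvl_0_0`. A self-contained sketch of that lookup, with the sysfs root made a parameter so it can be exercised against a fake tree:

```shell
# Sketch of the PCI-address -> net-device lookup performed in the log
# above. The second argument (sysfs root) is an addition for testing;
# the harness uses /sys/bus/pci/devices directly.
pci_to_netdevs() {
  local pci=$1 sysfs=${2:-/sys/bus/pci/devices}
  local devs=("$sysfs/$pci/net/"*)
  # Strip the directory prefix, leaving only interface names
  # (e.g. cvl_0_0), as nvmf/common.sh does with ${pci_net_devs[@]##*/}.
  printf '%s\n' "${devs[@]##*/}"
}
```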
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.881 22:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:50.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:11:50.881 00:11:50.881 --- 10.0.0.2 ping statistics --- 00:11:50.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.881 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:11:50.881 00:11:50.881 --- 10.0.0.1 ping statistics --- 00:11:50.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.881 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
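The nvmf_tcp_init trace above builds a two-interface topology: the target NIC is moved into a fresh network namespace, the initiator NIC stays in the root namespace, both get 10.0.0.x/24 addresses, an iptables rule opens port 4420, and pings in both directions confirm connectivity. A dry-run condensation of those commands (interface, namespace, and address values are taken from this particular run and will differ on other rigs; DRY_RUN prints instead of executing, since the real commands need root and the actual NICs):

```shell
# Dry-run sketch of the netns topology set up in the log above.
run() { [ "${DRY_RUN:-0}" = 1 ] && { echo "+ $*"; return; }; "$@"; }
DRY_RUN=1

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"              # target NIC into namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side (root ns)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                           # initiator -> target check
run ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator check
```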
nvmfpid=15327 00:11:50.881 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 15327 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 15327 ']' 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 [2024-12-10 22:42:58.211059] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:11:50.882 [2024-12-10 22:42:58.211124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.882 [2024-12-10 22:42:58.280674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.882 [2024-12-10 22:42:58.336699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:50.882 [2024-12-10 22:42:58.336757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.882 [2024-12-10 22:42:58.336780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.882 [2024-12-10 22:42:58.336790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.882 [2024-12-10 22:42:58.336799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.882 [2024-12-10 22:42:58.338343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.882 [2024-12-10 22:42:58.338469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.882 [2024-12-10 22:42:58.338538] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.882 [2024-12-10 22:42:58.338538] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:50.882 22:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 [2024-12-10 22:42:58.487449] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 22:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 [2024-12-10 22:42:58.549330] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:50.882 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:54.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:05.020 22:43:12 
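The `rpc_cmd` calls traced above bring the target to a connectable state: create the TCP transport, back it with a 64 MiB/512 B malloc bdev, create subsystem cnode1, attach the namespace, and add the 10.0.0.2:4420 listener. `rpc_cmd` wraps SPDK's `scripts/rpc.py` against the app's RPC socket; assuming the default `/var/tmp/spdk.sock` from this run, the equivalent direct invocations would be (dry-run form, since they need a live nvmf_tgt):

```shell
# Dry-run sketch of the RPC bring-up sequence from the log above.
run() { [ "${DRY_RUN:-0}" = 1 ] && { echo "+ $*"; return; }; "$@"; }
DRY_RUN=1

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"       # socket path assumed from log
NQN=nqn.2016-06.io.spdk:cnode1

run $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
run $RPC bdev_malloc_create 64 512               # log names the result Malloc0
run $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
run $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
run $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The five "disconnected 1 controller(s)" lines that follow are the test body itself: num_iterations=5 connect/disconnect cycles against that listener.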
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.020 rmmod nvme_tcp 00:12:05.020 rmmod nvme_fabrics 00:12:05.020 rmmod nvme_keyring 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 15327 ']' 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 15327 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 15327 ']' 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 15327 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 15327 00:12:05.020 22:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 15327' 00:12:05.020 killing process with pid 15327 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 15327 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 15327 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
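The nvmftestfini trace above tears everything down in reverse: unload the kernel initiator modules, kill the target process (pid 15327 in this run), strip the SPDK-tagged iptables rule, then flush and remove the test namespace. A dry-run condensation (pid and names are specific to this run):

```shell
# Dry-run sketch of the teardown steps from the log above.
run() { [ "${DRY_RUN:-0}" = 1 ] && { echo "+ $*"; return; }; "$@"; }
DRY_RUN=1

run modprobe -v -r nvme-tcp                      # also removes nvme-fabrics dep
run modprobe -v -r nvme-fabrics
run kill -15 15327                               # nvmfpid from this run
# Drop only the rules the harness tagged with a SPDK_NVMF comment:
run sh -c 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip -4 addr flush cvl_0_1
run ip netns delete cvl_0_0_ns_spdk
```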
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.020 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.559 00:12:07.559 real 0m19.166s 00:12:07.559 user 0m57.233s 00:12:07.559 sys 0m3.457s 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.559 ************************************ 00:12:07.559 END TEST nvmf_connect_disconnect 00:12:07.559 ************************************ 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.559 ************************************ 00:12:07.559 START TEST nvmf_multitarget 00:12:07.559 ************************************ 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:07.559 * Looking for test storage... 
00:12:07.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:07.559 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.559 --rc genhtml_branch_coverage=1 00:12:07.559 --rc genhtml_function_coverage=1 00:12:07.559 --rc genhtml_legend=1 00:12:07.559 --rc geninfo_all_blocks=1 00:12:07.559 --rc geninfo_unexecuted_blocks=1 00:12:07.559 00:12:07.559 ' 00:12:07.559 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:07.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.559 --rc genhtml_branch_coverage=1 00:12:07.559 --rc genhtml_function_coverage=1 00:12:07.560 --rc genhtml_legend=1 00:12:07.560 --rc geninfo_all_blocks=1 00:12:07.560 --rc geninfo_unexecuted_blocks=1 00:12:07.560 00:12:07.560 ' 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:07.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.560 --rc genhtml_branch_coverage=1 00:12:07.560 --rc genhtml_function_coverage=1 00:12:07.560 --rc genhtml_legend=1 00:12:07.560 --rc geninfo_all_blocks=1 00:12:07.560 --rc geninfo_unexecuted_blocks=1 00:12:07.560 00:12:07.560 ' 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:07.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.560 --rc genhtml_branch_coverage=1 00:12:07.560 --rc genhtml_function_coverage=1 00:12:07.560 --rc genhtml_legend=1 00:12:07.560 --rc geninfo_all_blocks=1 00:12:07.560 --rc geninfo_unexecuted_blocks=1 00:12:07.560 00:12:07.560 ' 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.560 22:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.560 22:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.560 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:09.525 22:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.525 22:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:09.525 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.525 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:09.526 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.526 22:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:09.526 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.526 
22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:09.526 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.526 22:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.526 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:12:09.785 00:12:09.785 --- 10.0.0.2 ping statistics --- 00:12:09.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.785 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:12:09.785 00:12:09.785 --- 10.0.0.1 ping statistics --- 00:12:09.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.785 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=19095 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 19095 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 19095 ']' 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.785 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:09.785 [2024-12-10 22:43:17.420968] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:12:09.785 [2024-12-10 22:43:17.421076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.785 [2024-12-10 22:43:17.497479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.043 [2024-12-10 22:43:17.556127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.043 [2024-12-10 22:43:17.556191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:10.043 [2024-12-10 22:43:17.556214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.043 [2024-12-10 22:43:17.556225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.043 [2024-12-10 22:43:17.556234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.043 [2024-12-10 22:43:17.557749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.044 [2024-12-10 22:43:17.557773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.044 [2024-12-10 22:43:17.557832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.044 [2024-12-10 22:43:17.557836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.044 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.044 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:10.044 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.044 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.044 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:10.044 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.044 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:10.044 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:10.044 22:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:10.302 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:10.302 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:10.302 "nvmf_tgt_1" 00:12:10.302 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:10.560 "nvmf_tgt_2" 00:12:10.560 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:10.560 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:10.560 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:10.560 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:10.560 true 00:12:10.819 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:10.819 true 00:12:10.819 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:10.819 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:10.819 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:10.819 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:10.819 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:10.819 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.819 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:10.819 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.819 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:10.820 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.820 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.820 rmmod nvme_tcp 00:12:11.078 rmmod nvme_fabrics 00:12:11.078 rmmod nvme_keyring 00:12:11.078 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.078 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:11.078 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:11.078 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 19095 ']' 00:12:11.078 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 19095 00:12:11.078 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 19095 ']' 00:12:11.078 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 19095 00:12:11.078 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:11.078 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.078 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 19095 00:12:11.078 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.079 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.079 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 19095' 00:12:11.079 killing process with pid 19095 00:12:11.079 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 19095 00:12:11.079 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 19095 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.337 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.246 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:13.246 00:12:13.246 real 0m6.076s 00:12:13.246 user 0m6.833s 00:12:13.246 sys 0m2.112s 00:12:13.246 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.246 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:13.246 ************************************ 00:12:13.246 END TEST nvmf_multitarget 00:12:13.246 ************************************ 00:12:13.246 22:43:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:13.246 22:43:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:13.246 22:43:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.246 22:43:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:13.246 ************************************ 00:12:13.246 START TEST nvmf_rpc 00:12:13.246 ************************************ 00:12:13.246 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:13.505 * Looking for test storage... 
00:12:13.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.505 22:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:13.505 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:13.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.506 --rc genhtml_branch_coverage=1 00:12:13.506 --rc genhtml_function_coverage=1 00:12:13.506 --rc genhtml_legend=1 00:12:13.506 --rc geninfo_all_blocks=1 00:12:13.506 --rc geninfo_unexecuted_blocks=1 
00:12:13.506 00:12:13.506 ' 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:13.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.506 --rc genhtml_branch_coverage=1 00:12:13.506 --rc genhtml_function_coverage=1 00:12:13.506 --rc genhtml_legend=1 00:12:13.506 --rc geninfo_all_blocks=1 00:12:13.506 --rc geninfo_unexecuted_blocks=1 00:12:13.506 00:12:13.506 ' 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:13.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.506 --rc genhtml_branch_coverage=1 00:12:13.506 --rc genhtml_function_coverage=1 00:12:13.506 --rc genhtml_legend=1 00:12:13.506 --rc geninfo_all_blocks=1 00:12:13.506 --rc geninfo_unexecuted_blocks=1 00:12:13.506 00:12:13.506 ' 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:13.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.506 --rc genhtml_branch_coverage=1 00:12:13.506 --rc genhtml_function_coverage=1 00:12:13.506 --rc genhtml_legend=1 00:12:13.506 --rc geninfo_all_blocks=1 00:12:13.506 --rc geninfo_unexecuted_blocks=1 00:12:13.506 00:12:13.506 ' 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.506 22:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:13.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:13.506 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:13.506 22:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.042 
22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:12:16.042 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:16.042 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:16.042 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:16.042 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.042 22:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:16.042 
22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:16.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:12:16.042 00:12:16.042 --- 10.0.0.2 ping statistics --- 00:12:16.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.042 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:12:16.042 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:12:16.043 00:12:16.043 --- 10.0.0.1 ping statistics --- 00:12:16.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.043 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=21211 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.043 
22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 21211 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 21211 ']' 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.043 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.043 [2024-12-10 22:43:23.532401] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:12:16.043 [2024-12-10 22:43:23.532480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.043 [2024-12-10 22:43:23.605410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.043 [2024-12-10 22:43:23.664063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.043 [2024-12-10 22:43:23.664133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.043 [2024-12-10 22:43:23.664145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.043 [2024-12-10 22:43:23.664156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:16.043 [2024-12-10 22:43:23.664164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.043 [2024-12-10 22:43:23.666099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.043 [2024-12-10 22:43:23.666181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.043 [2024-12-10 22:43:23.666267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.043 [2024-12-10 22:43:23.666272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.300 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.300 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:16.300 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.300 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.300 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.300 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.300 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:16.301 "tick_rate": 2700000000, 00:12:16.301 "poll_groups": [ 00:12:16.301 { 00:12:16.301 "name": "nvmf_tgt_poll_group_000", 00:12:16.301 "admin_qpairs": 0, 00:12:16.301 "io_qpairs": 0, 00:12:16.301 
"current_admin_qpairs": 0, 00:12:16.301 "current_io_qpairs": 0, 00:12:16.301 "pending_bdev_io": 0, 00:12:16.301 "completed_nvme_io": 0, 00:12:16.301 "transports": [] 00:12:16.301 }, 00:12:16.301 { 00:12:16.301 "name": "nvmf_tgt_poll_group_001", 00:12:16.301 "admin_qpairs": 0, 00:12:16.301 "io_qpairs": 0, 00:12:16.301 "current_admin_qpairs": 0, 00:12:16.301 "current_io_qpairs": 0, 00:12:16.301 "pending_bdev_io": 0, 00:12:16.301 "completed_nvme_io": 0, 00:12:16.301 "transports": [] 00:12:16.301 }, 00:12:16.301 { 00:12:16.301 "name": "nvmf_tgt_poll_group_002", 00:12:16.301 "admin_qpairs": 0, 00:12:16.301 "io_qpairs": 0, 00:12:16.301 "current_admin_qpairs": 0, 00:12:16.301 "current_io_qpairs": 0, 00:12:16.301 "pending_bdev_io": 0, 00:12:16.301 "completed_nvme_io": 0, 00:12:16.301 "transports": [] 00:12:16.301 }, 00:12:16.301 { 00:12:16.301 "name": "nvmf_tgt_poll_group_003", 00:12:16.301 "admin_qpairs": 0, 00:12:16.301 "io_qpairs": 0, 00:12:16.301 "current_admin_qpairs": 0, 00:12:16.301 "current_io_qpairs": 0, 00:12:16.301 "pending_bdev_io": 0, 00:12:16.301 "completed_nvme_io": 0, 00:12:16.301 "transports": [] 00:12:16.301 } 00:12:16.301 ] 00:12:16.301 }' 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.301 [2024-12-10 22:43:23.940409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:16.301 "tick_rate": 2700000000, 00:12:16.301 "poll_groups": [ 00:12:16.301 { 00:12:16.301 "name": "nvmf_tgt_poll_group_000", 00:12:16.301 "admin_qpairs": 0, 00:12:16.301 "io_qpairs": 0, 00:12:16.301 "current_admin_qpairs": 0, 00:12:16.301 "current_io_qpairs": 0, 00:12:16.301 "pending_bdev_io": 0, 00:12:16.301 "completed_nvme_io": 0, 00:12:16.301 "transports": [ 00:12:16.301 { 00:12:16.301 "trtype": "TCP" 00:12:16.301 } 00:12:16.301 ] 00:12:16.301 }, 00:12:16.301 { 00:12:16.301 "name": "nvmf_tgt_poll_group_001", 00:12:16.301 "admin_qpairs": 0, 00:12:16.301 "io_qpairs": 0, 00:12:16.301 "current_admin_qpairs": 0, 00:12:16.301 "current_io_qpairs": 0, 00:12:16.301 "pending_bdev_io": 0, 00:12:16.301 "completed_nvme_io": 0, 00:12:16.301 "transports": [ 00:12:16.301 { 00:12:16.301 "trtype": "TCP" 00:12:16.301 } 00:12:16.301 ] 00:12:16.301 }, 00:12:16.301 { 00:12:16.301 "name": "nvmf_tgt_poll_group_002", 00:12:16.301 "admin_qpairs": 0, 00:12:16.301 "io_qpairs": 0, 00:12:16.301 
"current_admin_qpairs": 0, 00:12:16.301 "current_io_qpairs": 0, 00:12:16.301 "pending_bdev_io": 0, 00:12:16.301 "completed_nvme_io": 0, 00:12:16.301 "transports": [ 00:12:16.301 { 00:12:16.301 "trtype": "TCP" 00:12:16.301 } 00:12:16.301 ] 00:12:16.301 }, 00:12:16.301 { 00:12:16.301 "name": "nvmf_tgt_poll_group_003", 00:12:16.301 "admin_qpairs": 0, 00:12:16.301 "io_qpairs": 0, 00:12:16.301 "current_admin_qpairs": 0, 00:12:16.301 "current_io_qpairs": 0, 00:12:16.301 "pending_bdev_io": 0, 00:12:16.301 "completed_nvme_io": 0, 00:12:16.301 "transports": [ 00:12:16.301 { 00:12:16.301 "trtype": "TCP" 00:12:16.301 } 00:12:16.301 ] 00:12:16.301 } 00:12:16.301 ] 00:12:16.301 }' 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:16.301 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.301 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:16.301 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:16.301 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:16.301 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:16.301 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.560 Malloc1 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.560 [2024-12-10 22:43:24.125100] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.560 
22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:16.560 [2024-12-10 22:43:24.147662] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:16.560 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:16.560 could not add new controller: failed to write to nvme-fabrics device 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.560 22:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.560 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.494 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.494 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:17.494 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.494 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:17.494 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.390 22:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:19.390 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.391 [2024-12-10 22:43:26.983677] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:19.391 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:19.391 could not add new controller: failed to write to nvme-fabrics device 00:12:19.391 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:19.391 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:19.391 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:19.391 22:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:19.391 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:19.391 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.391 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.391 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.391 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.956 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.956 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:19.956 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.956 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:19.956 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.483 [2024-12-10 22:43:29.728200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.483 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.741 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.741 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:22.741 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.741 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:22.741 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:24.639 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:24.639 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:24.639 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.639 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:24.639 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.639 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:24.639 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:24.897 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.898 22:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.898 [2024-12-10 22:43:32.446132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.898 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.463 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.463 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:25.463 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.463 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:25.463 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:27.362 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:27.362 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:27.362 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.362 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:27.362 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.362 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:27.362 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.621 [2024-12-10 22:43:35.171323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.621 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.186 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.186 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:28.187 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:28.187 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:28.187 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.712 [2024-12-10 22:43:37.993369] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.712 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.712 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.712 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.712 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.712 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.712 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.712 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.970 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.970 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:30.970 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.970 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:30.970 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.495 [2024-12-10 22:43:40.860048] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.495 22:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.495 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.060 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.060 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:34.060 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.060 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:34.060 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:35.958 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:35.958 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:12:35.958 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.958 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:35.958 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.958 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:35.958 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.958 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.958 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:35.958 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:35.958 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 [2024-12-10 22:43:43.742369] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 [2024-12-10 22:43:43.790409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.216 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.216 
22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 [2024-12-10 22:43:43.838581] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.217 
22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 [2024-12-10 22:43:43.886743] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.217 [2024-12-10 
22:43:43.934915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.217 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.476 
22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:36.476 "tick_rate": 2700000000, 00:12:36.476 "poll_groups": [ 00:12:36.476 { 00:12:36.476 "name": "nvmf_tgt_poll_group_000", 00:12:36.476 "admin_qpairs": 2, 00:12:36.476 "io_qpairs": 84, 00:12:36.476 "current_admin_qpairs": 0, 00:12:36.476 "current_io_qpairs": 0, 00:12:36.476 "pending_bdev_io": 0, 00:12:36.476 "completed_nvme_io": 138, 00:12:36.476 "transports": [ 00:12:36.476 { 00:12:36.476 "trtype": "TCP" 00:12:36.476 } 00:12:36.476 ] 00:12:36.476 }, 00:12:36.476 { 00:12:36.476 "name": "nvmf_tgt_poll_group_001", 00:12:36.476 "admin_qpairs": 2, 00:12:36.476 "io_qpairs": 84, 00:12:36.476 "current_admin_qpairs": 0, 00:12:36.476 "current_io_qpairs": 0, 00:12:36.476 "pending_bdev_io": 0, 00:12:36.476 "completed_nvme_io": 183, 00:12:36.476 "transports": [ 00:12:36.476 { 00:12:36.476 "trtype": "TCP" 00:12:36.476 } 00:12:36.476 ] 00:12:36.476 }, 00:12:36.476 { 00:12:36.476 "name": "nvmf_tgt_poll_group_002", 00:12:36.476 "admin_qpairs": 1, 00:12:36.476 "io_qpairs": 84, 00:12:36.476 "current_admin_qpairs": 0, 00:12:36.476 "current_io_qpairs": 0, 00:12:36.476 "pending_bdev_io": 0, 00:12:36.476 "completed_nvme_io": 137, 00:12:36.476 "transports": [ 00:12:36.476 { 00:12:36.476 "trtype": "TCP" 00:12:36.476 } 00:12:36.476 ] 00:12:36.476 }, 00:12:36.476 { 00:12:36.476 "name": "nvmf_tgt_poll_group_003", 00:12:36.476 "admin_qpairs": 2, 00:12:36.476 "io_qpairs": 84, 
00:12:36.476 "current_admin_qpairs": 0, 00:12:36.476 "current_io_qpairs": 0, 00:12:36.476 "pending_bdev_io": 0, 00:12:36.476 "completed_nvme_io": 228, 00:12:36.476 "transports": [ 00:12:36.476 { 00:12:36.476 "trtype": "TCP" 00:12:36.476 } 00:12:36.476 ] 00:12:36.476 } 00:12:36.476 ] 00:12:36.476 }' 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:36.476 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:36.476 rmmod nvme_tcp 00:12:36.476 rmmod nvme_fabrics 00:12:36.476 rmmod nvme_keyring 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 21211 ']' 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 21211 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 21211 ']' 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 21211 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 21211 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 21211' 00:12:36.476 killing process with pid 21211 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 21211 00:12:36.476 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 21211 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.735 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.272 00:12:39.272 real 0m25.516s 00:12:39.272 user 1m22.395s 00:12:39.272 sys 0m4.294s 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.272 ************************************ 00:12:39.272 END TEST nvmf_rpc 00:12:39.272 
************************************ 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.272 ************************************ 00:12:39.272 START TEST nvmf_invalid 00:12:39.272 ************************************ 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:39.272 * Looking for test storage... 00:12:39.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:39.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.272 --rc genhtml_branch_coverage=1 00:12:39.272 --rc genhtml_function_coverage=1 00:12:39.272 --rc genhtml_legend=1 00:12:39.272 --rc geninfo_all_blocks=1 00:12:39.272 --rc geninfo_unexecuted_blocks=1 00:12:39.272 00:12:39.272 ' 
00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:39.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.272 --rc genhtml_branch_coverage=1 00:12:39.272 --rc genhtml_function_coverage=1 00:12:39.272 --rc genhtml_legend=1 00:12:39.272 --rc geninfo_all_blocks=1 00:12:39.272 --rc geninfo_unexecuted_blocks=1 00:12:39.272 00:12:39.272 ' 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:39.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.272 --rc genhtml_branch_coverage=1 00:12:39.272 --rc genhtml_function_coverage=1 00:12:39.272 --rc genhtml_legend=1 00:12:39.272 --rc geninfo_all_blocks=1 00:12:39.272 --rc geninfo_unexecuted_blocks=1 00:12:39.272 00:12:39.272 ' 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:39.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.272 --rc genhtml_branch_coverage=1 00:12:39.272 --rc genhtml_function_coverage=1 00:12:39.272 --rc genhtml_legend=1 00:12:39.272 --rc geninfo_all_blocks=1 00:12:39.272 --rc geninfo_unexecuted_blocks=1 00:12:39.272 00:12:39.272 ' 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.272 22:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.272 
22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.272 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.273 22:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.273 22:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.273 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.239 22:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.239 22:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:41.239 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:41.239 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:41.239 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.239 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:41.240 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.240 22:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.240 22:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:41.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:12:41.240 00:12:41.240 --- 10.0.0.2 ping statistics --- 00:12:41.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.240 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:41.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:12:41.240 00:12:41.240 --- 10.0.0.1 ping statistics --- 00:12:41.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.240 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:41.240 22:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=25851 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 25851 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 25851 ']' 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
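The `nvmf_tcp_init` sequence traced above (flush addresses, create a namespace, move the target-side port into it, address both ends, open TCP port 4420, ping both directions) can be condensed into a standalone sketch. Interface names (`cvl_0_0`/`cvl_0_1`), addresses, and the port are taken from the log; the `run()` wrapper is an illustrative addition that only echoes each command, so the sketch is safe to inspect without root or real NICs.

```shell
# Dry-run sketch of the netns setup from the trace above; run() only echoes.
# Drop the run() indirection to execute the commands for real (needs root).
run() { printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
# move the target-side port into the namespace, then address both ends
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listener port on the initiator-side interface
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify connectivity in both directions, as the trace does
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Isolating the target in its own namespace is what lets a single host act as both NVMe/TCP target and initiator over real hardware ports.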
00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.240 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:41.499 [2024-12-10 22:43:49.009948] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:12:41.499 [2024-12-10 22:43:49.010025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.499 [2024-12-10 22:43:49.079763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.499 [2024-12-10 22:43:49.134322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.499 [2024-12-10 22:43:49.134381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.499 [2024-12-10 22:43:49.134408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.499 [2024-12-10 22:43:49.134418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.499 [2024-12-10 22:43:49.134428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
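The `nvmf_create_subsystem` negative tests that follow drive `rpc.py` with deliberately invalid arguments and pattern-match the JSON-RPC error. A sketch of the request body behind such a call is below; the method and parameter names (`nqn`, `tgt_name`) are copied from the error echoes in the log, while `build_req` and the exact field layout are illustrative assumptions (`rpc.py` itself speaks JSON-RPC over `/var/tmp/spdk.sock`).

```shell
# Assemble the JSON-RPC body corresponding to:
#   rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23348
# build_req is a hypothetical helper for offline inspection, not part of SPDK.
build_req() {
    local nqn=$1 tgt_name=$2
    printf '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_create_subsystem", "params": {"nqn": "%s", "tgt_name": "%s"}}\n' \
        "$nqn" "$tgt_name"
}

build_req "nqn.2016-06.io.spdk:cnode23348" "foobar"
```

Because the target name `foobar` does not exist, the server answers with code -32603 and "Unable to find target foobar", which the test asserts on.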
00:12:41.499 [2024-12-10 22:43:49.136115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.499 [2024-12-10 22:43:49.136172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.499 [2024-12-10 22:43:49.136240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.499 [2024-12-10 22:43:49.136243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.756 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.756 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:41.756 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.756 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.756 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:41.756 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.757 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:41.757 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23348 00:12:42.016 [2024-12-10 22:43:49.623524] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:42.016 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:42.016 { 00:12:42.016 "nqn": "nqn.2016-06.io.spdk:cnode23348", 00:12:42.016 "tgt_name": "foobar", 00:12:42.016 "method": "nvmf_create_subsystem", 00:12:42.016 "req_id": 1 00:12:42.016 } 00:12:42.016 Got JSON-RPC error 
response 00:12:42.016 response: 00:12:42.016 { 00:12:42.016 "code": -32603, 00:12:42.016 "message": "Unable to find target foobar" 00:12:42.016 }' 00:12:42.016 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:42.016 { 00:12:42.016 "nqn": "nqn.2016-06.io.spdk:cnode23348", 00:12:42.016 "tgt_name": "foobar", 00:12:42.016 "method": "nvmf_create_subsystem", 00:12:42.016 "req_id": 1 00:12:42.016 } 00:12:42.016 Got JSON-RPC error response 00:12:42.016 response: 00:12:42.016 { 00:12:42.016 "code": -32603, 00:12:42.016 "message": "Unable to find target foobar" 00:12:42.016 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:42.017 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:42.017 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28766 00:12:42.274 [2024-12-10 22:43:49.920510] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28766: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:42.274 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:42.274 { 00:12:42.274 "nqn": "nqn.2016-06.io.spdk:cnode28766", 00:12:42.274 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:42.274 "method": "nvmf_create_subsystem", 00:12:42.274 "req_id": 1 00:12:42.274 } 00:12:42.274 Got JSON-RPC error response 00:12:42.274 response: 00:12:42.274 { 00:12:42.274 "code": -32602, 00:12:42.274 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:42.274 }' 00:12:42.274 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:42.274 { 00:12:42.274 "nqn": "nqn.2016-06.io.spdk:cnode28766", 00:12:42.274 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:42.274 "method": "nvmf_create_subsystem", 
00:12:42.274 "req_id": 1 00:12:42.274 } 00:12:42.274 Got JSON-RPC error response 00:12:42.274 response: 00:12:42.274 { 00:12:42.274 "code": -32602, 00:12:42.274 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:42.274 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:42.274 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:42.274 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32658 00:12:42.533 [2024-12-10 22:43:50.205513] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32658: invalid model number 'SPDK_Controller' 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:42.533 { 00:12:42.533 "nqn": "nqn.2016-06.io.spdk:cnode32658", 00:12:42.533 "model_number": "SPDK_Controller\u001f", 00:12:42.533 "method": "nvmf_create_subsystem", 00:12:42.533 "req_id": 1 00:12:42.533 } 00:12:42.533 Got JSON-RPC error response 00:12:42.533 response: 00:12:42.533 { 00:12:42.533 "code": -32602, 00:12:42.533 "message": "Invalid MN SPDK_Controller\u001f" 00:12:42.533 }' 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:42.533 { 00:12:42.533 "nqn": "nqn.2016-06.io.spdk:cnode32658", 00:12:42.533 "model_number": "SPDK_Controller\u001f", 00:12:42.533 "method": "nvmf_create_subsystem", 00:12:42.533 "req_id": 1 00:12:42.533 } 00:12:42.533 Got JSON-RPC error response 00:12:42.533 response: 00:12:42.533 { 00:12:42.533 "code": -32602, 00:12:42.533 "message": "Invalid MN SPDK_Controller\u001f" 00:12:42.533 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.533 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:42.533 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:42.534 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:42.534 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:42.792 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:42.792 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.792 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:42.792 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:42.793 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ S == \- ]] 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'SWa|4J yItB@C(}Jp%eV0' 00:12:42.793 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'SWa|4J yItB@C(}Jp%eV0' nqn.2016-06.io.spdk:cnode550 00:12:43.052 [2024-12-10 22:43:50.558611] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode550: invalid serial number 'SWa|4J yItB@C(}Jp%eV0' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:43.052 { 00:12:43.052 "nqn": "nqn.2016-06.io.spdk:cnode550", 00:12:43.052 "serial_number": "SWa|4J yItB@C(}Jp%eV0", 00:12:43.052 "method": "nvmf_create_subsystem", 00:12:43.052 "req_id": 1 00:12:43.052 } 00:12:43.052 Got JSON-RPC error response 00:12:43.052 response: 00:12:43.052 { 00:12:43.052 "code": -32602, 00:12:43.052 "message": "Invalid SN SWa|4J yItB@C(}Jp%eV0" 00:12:43.052 }' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:43.052 { 00:12:43.052 "nqn": "nqn.2016-06.io.spdk:cnode550", 00:12:43.052 "serial_number": "SWa|4J yItB@C(}Jp%eV0", 00:12:43.052 "method": "nvmf_create_subsystem", 00:12:43.052 "req_id": 1 00:12:43.052 } 00:12:43.052 Got JSON-RPC error response 00:12:43.052 response: 00:12:43.052 { 00:12:43.052 "code": -32602, 00:12:43.052 "message": "Invalid SN SWa|4J yItB@C(}Jp%eV0" 00:12:43.052 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:43.052 22:43:50 
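The per-character xtrace above is `gen_random_s` building a 21-character serial number one printable character at a time (`printf %x` to pick a code point, `echo -e '\xNN'` to append it). A minimal re-implementation sketch, assuming bash: the real script indexes a `chars=(32..127)` array; the range 32-126 is used here to keep every character printable.

```shell
# Sketch of the gen_random_s helper traced above: append $1 random printable
# ASCII characters (codes 32-126) and print the result. bash-specific (RANDOM,
# printf -v); printf -v avoids losing trailing spaces to command substitution.
gen_random_s() {
    local length=$1 ll string= ch
    for (( ll = 0; ll < length; ll++ )); do
        # random code point in [32,126], appended via a \xNN escape
        printf -v ch "\\x$(printf '%x' $(( RANDOM % 95 + 32 )))"
        string+=$ch
    done
    printf '%s\n' "$string"
}

gen_random_s 21   # e.g. 'SWa|4J yItB@C(}Jp%eV0' in the trace above
```

The resulting string is then passed as an over-length or otherwise invalid serial number to `nvmf_create_subsystem`, which must reject it with "Invalid SN".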
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.052 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:43.052 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:43.052 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:43.052 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:43.053 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:43.053 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:43.053 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:43.053 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:43.053 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.054 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.054 22:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ W == \- ]] 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'W]!)">CLz\QL\VeYN}#2GQc&}BEu_Wq\7`0giA^f[' 00:12:43.054 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'W]!)">CLz\QL\VeYN}#2GQc&}BEu_Wq\7`0giA^f[' nqn.2016-06.io.spdk:cnode10394 00:12:43.620 [2024-12-10 22:43:51.052212] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10394: invalid model number 'W]!)">CLz\QL\VeYN}#2GQc&}BEu_Wq\7`0giA^f[' 00:12:43.620 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:43.620 { 00:12:43.620 "nqn": "nqn.2016-06.io.spdk:cnode10394", 00:12:43.620 "model_number": "W]!)\">CLz\\QL\\VeYN}#2GQc&}BEu_Wq\\7`0giA^f[", 00:12:43.620 "method": "nvmf_create_subsystem", 00:12:43.620 "req_id": 1 00:12:43.620 } 00:12:43.620 Got JSON-RPC error response 00:12:43.620 response: 00:12:43.620 { 00:12:43.620 "code": -32602, 00:12:43.620 "message": "Invalid MN W]!)\">CLz\\QL\\VeYN}#2GQc&}BEu_Wq\\7`0giA^f[" 00:12:43.620 }' 00:12:43.620 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:43.620 { 00:12:43.620 "nqn": 
"nqn.2016-06.io.spdk:cnode10394", 00:12:43.620 "model_number": "W]!)\">CLz\\QL\\VeYN}#2GQc&}BEu_Wq\\7`0giA^f[", 00:12:43.620 "method": "nvmf_create_subsystem", 00:12:43.620 "req_id": 1 00:12:43.620 } 00:12:43.620 Got JSON-RPC error response 00:12:43.620 response: 00:12:43.620 { 00:12:43.620 "code": -32602, 00:12:43.620 "message": "Invalid MN W]!)\">CLz\\QL\\VeYN}#2GQc&}BEu_Wq\\7`0giA^f[" 00:12:43.620 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:43.620 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:43.620 [2024-12-10 22:43:51.325176] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.620 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:44.187 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:44.187 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:44.187 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:44.187 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:44.187 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:44.187 [2024-12-10 22:43:51.866989] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:44.187 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:44.187 { 00:12:44.187 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:44.187 "listen_address": { 00:12:44.187 "trtype": "tcp", 00:12:44.187 "traddr": "", 00:12:44.187 "trsvcid": 
"4421" 00:12:44.187 }, 00:12:44.187 "method": "nvmf_subsystem_remove_listener", 00:12:44.187 "req_id": 1 00:12:44.187 } 00:12:44.187 Got JSON-RPC error response 00:12:44.187 response: 00:12:44.187 { 00:12:44.187 "code": -32602, 00:12:44.187 "message": "Invalid parameters" 00:12:44.187 }' 00:12:44.187 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:44.187 { 00:12:44.187 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:44.187 "listen_address": { 00:12:44.187 "trtype": "tcp", 00:12:44.187 "traddr": "", 00:12:44.187 "trsvcid": "4421" 00:12:44.187 }, 00:12:44.187 "method": "nvmf_subsystem_remove_listener", 00:12:44.187 "req_id": 1 00:12:44.187 } 00:12:44.187 Got JSON-RPC error response 00:12:44.187 response: 00:12:44.187 { 00:12:44.187 "code": -32602, 00:12:44.187 "message": "Invalid parameters" 00:12:44.187 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:44.187 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24629 -i 0 00:12:44.446 [2024-12-10 22:43:52.147900] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24629: invalid cntlid range [0-65519] 00:12:44.446 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:44.446 { 00:12:44.446 "nqn": "nqn.2016-06.io.spdk:cnode24629", 00:12:44.446 "min_cntlid": 0, 00:12:44.446 "method": "nvmf_create_subsystem", 00:12:44.446 "req_id": 1 00:12:44.446 } 00:12:44.446 Got JSON-RPC error response 00:12:44.446 response: 00:12:44.446 { 00:12:44.446 "code": -32602, 00:12:44.446 "message": "Invalid cntlid range [0-65519]" 00:12:44.446 }' 00:12:44.446 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:44.446 { 00:12:44.446 "nqn": "nqn.2016-06.io.spdk:cnode24629", 00:12:44.446 "min_cntlid": 0, 00:12:44.446 "method": 
"nvmf_create_subsystem", 00:12:44.446 "req_id": 1 00:12:44.446 } 00:12:44.446 Got JSON-RPC error response 00:12:44.446 response: 00:12:44.446 { 00:12:44.446 "code": -32602, 00:12:44.446 "message": "Invalid cntlid range [0-65519]" 00:12:44.446 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:44.446 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17378 -i 65520 00:12:44.704 [2024-12-10 22:43:52.424859] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17378: invalid cntlid range [65520-65519] 00:12:44.961 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:44.961 { 00:12:44.961 "nqn": "nqn.2016-06.io.spdk:cnode17378", 00:12:44.961 "min_cntlid": 65520, 00:12:44.961 "method": "nvmf_create_subsystem", 00:12:44.961 "req_id": 1 00:12:44.961 } 00:12:44.961 Got JSON-RPC error response 00:12:44.961 response: 00:12:44.961 { 00:12:44.961 "code": -32602, 00:12:44.961 "message": "Invalid cntlid range [65520-65519]" 00:12:44.961 }' 00:12:44.961 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:44.961 { 00:12:44.961 "nqn": "nqn.2016-06.io.spdk:cnode17378", 00:12:44.961 "min_cntlid": 65520, 00:12:44.961 "method": "nvmf_create_subsystem", 00:12:44.961 "req_id": 1 00:12:44.961 } 00:12:44.961 Got JSON-RPC error response 00:12:44.961 response: 00:12:44.961 { 00:12:44.961 "code": -32602, 00:12:44.961 "message": "Invalid cntlid range [65520-65519]" 00:12:44.961 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:44.961 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20570 -I 0 00:12:45.219 [2024-12-10 22:43:52.697705] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode20570: invalid cntlid range [1-0] 00:12:45.219 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:45.219 { 00:12:45.219 "nqn": "nqn.2016-06.io.spdk:cnode20570", 00:12:45.219 "max_cntlid": 0, 00:12:45.220 "method": "nvmf_create_subsystem", 00:12:45.220 "req_id": 1 00:12:45.220 } 00:12:45.220 Got JSON-RPC error response 00:12:45.220 response: 00:12:45.220 { 00:12:45.220 "code": -32602, 00:12:45.220 "message": "Invalid cntlid range [1-0]" 00:12:45.220 }' 00:12:45.220 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:45.220 { 00:12:45.220 "nqn": "nqn.2016-06.io.spdk:cnode20570", 00:12:45.220 "max_cntlid": 0, 00:12:45.220 "method": "nvmf_create_subsystem", 00:12:45.220 "req_id": 1 00:12:45.220 } 00:12:45.220 Got JSON-RPC error response 00:12:45.220 response: 00:12:45.220 { 00:12:45.220 "code": -32602, 00:12:45.220 "message": "Invalid cntlid range [1-0]" 00:12:45.220 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:45.220 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27352 -I 65520 00:12:45.478 [2024-12-10 22:43:52.962613] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27352: invalid cntlid range [1-65520] 00:12:45.478 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:45.478 { 00:12:45.478 "nqn": "nqn.2016-06.io.spdk:cnode27352", 00:12:45.478 "max_cntlid": 65520, 00:12:45.478 "method": "nvmf_create_subsystem", 00:12:45.478 "req_id": 1 00:12:45.478 } 00:12:45.478 Got JSON-RPC error response 00:12:45.478 response: 00:12:45.478 { 00:12:45.478 "code": -32602, 00:12:45.478 "message": "Invalid cntlid range [1-65520]" 00:12:45.478 }' 00:12:45.478 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:12:45.478 { 00:12:45.478 "nqn": "nqn.2016-06.io.spdk:cnode27352", 00:12:45.478 "max_cntlid": 65520, 00:12:45.478 "method": "nvmf_create_subsystem", 00:12:45.478 "req_id": 1 00:12:45.478 } 00:12:45.478 Got JSON-RPC error response 00:12:45.478 response: 00:12:45.478 { 00:12:45.478 "code": -32602, 00:12:45.478 "message": "Invalid cntlid range [1-65520]" 00:12:45.478 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:45.478 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23060 -i 6 -I 5 00:12:45.735 [2024-12-10 22:43:53.247552] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23060: invalid cntlid range [6-5] 00:12:45.735 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:45.735 { 00:12:45.735 "nqn": "nqn.2016-06.io.spdk:cnode23060", 00:12:45.735 "min_cntlid": 6, 00:12:45.735 "max_cntlid": 5, 00:12:45.735 "method": "nvmf_create_subsystem", 00:12:45.735 "req_id": 1 00:12:45.735 } 00:12:45.735 Got JSON-RPC error response 00:12:45.735 response: 00:12:45.735 { 00:12:45.735 "code": -32602, 00:12:45.735 "message": "Invalid cntlid range [6-5]" 00:12:45.735 }' 00:12:45.735 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:45.735 { 00:12:45.735 "nqn": "nqn.2016-06.io.spdk:cnode23060", 00:12:45.735 "min_cntlid": 6, 00:12:45.735 "max_cntlid": 5, 00:12:45.735 "method": "nvmf_create_subsystem", 00:12:45.735 "req_id": 1 00:12:45.735 } 00:12:45.735 Got JSON-RPC error response 00:12:45.735 response: 00:12:45.735 { 00:12:45.735 "code": -32602, 00:12:45.735 "message": "Invalid cntlid range [6-5]" 00:12:45.736 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:45.736 { 00:12:45.736 "name": "foobar", 00:12:45.736 "method": "nvmf_delete_target", 00:12:45.736 "req_id": 1 00:12:45.736 } 00:12:45.736 Got JSON-RPC error response 00:12:45.736 response: 00:12:45.736 { 00:12:45.736 "code": -32602, 00:12:45.736 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:45.736 }' 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:45.736 { 00:12:45.736 "name": "foobar", 00:12:45.736 "method": "nvmf_delete_target", 00:12:45.736 "req_id": 1 00:12:45.736 } 00:12:45.736 Got JSON-RPC error response 00:12:45.736 response: 00:12:45.736 { 00:12:45.736 "code": -32602, 00:12:45.736 "message": "The specified target doesn't exist, cannot delete it." 00:12:45.736 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.736 rmmod nvme_tcp 00:12:45.736 
rmmod nvme_fabrics 00:12:45.736 rmmod nvme_keyring 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 25851 ']' 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 25851 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 25851 ']' 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 25851 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.736 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 25851 00:12:45.994 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.994 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.994 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 25851' 00:12:45.994 killing process with pid 25851 00:12:45.994 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 25851 00:12:45.994 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 25851 00:12:46.251 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.251 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.251 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.251 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:46.252 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:46.252 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.252 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.252 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.252 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.252 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.252 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.252 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.157 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:48.157 00:12:48.157 real 0m9.256s 00:12:48.157 user 0m22.489s 00:12:48.157 sys 0m2.627s 00:12:48.157 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.157 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:48.157 ************************************ 00:12:48.157 END TEST nvmf_invalid 00:12:48.157 ************************************ 00:12:48.157 22:43:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:48.157 22:43:55 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:48.157 22:43:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.157 22:43:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.157 ************************************ 00:12:48.157 START TEST nvmf_connect_stress 00:12:48.157 ************************************ 00:12:48.157 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:48.157 * Looking for test storage... 00:12:48.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.157 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:48.157 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:48.157 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.418 22:43:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:48.418 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- 
# echo 2 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:48.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.419 --rc genhtml_branch_coverage=1 00:12:48.419 --rc genhtml_function_coverage=1 00:12:48.419 --rc genhtml_legend=1 00:12:48.419 --rc geninfo_all_blocks=1 00:12:48.419 --rc geninfo_unexecuted_blocks=1 00:12:48.419 00:12:48.419 ' 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:48.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.419 --rc genhtml_branch_coverage=1 00:12:48.419 --rc genhtml_function_coverage=1 00:12:48.419 --rc genhtml_legend=1 00:12:48.419 --rc geninfo_all_blocks=1 00:12:48.419 --rc geninfo_unexecuted_blocks=1 00:12:48.419 00:12:48.419 ' 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:48.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.419 --rc genhtml_branch_coverage=1 00:12:48.419 --rc genhtml_function_coverage=1 00:12:48.419 --rc genhtml_legend=1 00:12:48.419 --rc geninfo_all_blocks=1 00:12:48.419 --rc geninfo_unexecuted_blocks=1 00:12:48.419 00:12:48.419 ' 00:12:48.419 22:43:55 
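The long trace above is `scripts/common.sh` deciding whether the installed `lcov` predates 2.x (`lt 1.15 2`): both versions are split on `.`/`-` into arrays and compared field by field. An illustrative reimplementation of that comparison, not the SPDK source (it skips the `decimal` guard for non-numeric fields seen in the trace):

```shell
#!/usr/bin/env bash
# Illustrative re-sketch of the cmp_versions flow traced above: split each
# dotted version into an array and compare numerically, field by field.
lt() {
  local -a ver1 ver2
  IFS=.- read -ra ver1 <<< "$1"
  IFS=.- read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater
  done
  return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the run above returns 0 for `lt 1.15 2`: the first fields already decide it (1 < 2), so the 1.x lcov option set is selected.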
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:48.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.419 --rc genhtml_branch_coverage=1 00:12:48.419 --rc genhtml_function_coverage=1 00:12:48.419 --rc genhtml_legend=1 00:12:48.419 --rc geninfo_all_blocks=1 00:12:48.419 --rc geninfo_unexecuted_blocks=1 00:12:48.419 00:12:48.419 ' 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.419 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.419 22:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.419 22:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:48.419 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:50.960 
Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:50.960 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:50.960 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.960 22:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:50.960 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.960 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:50.961 
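The discovery loop above finds each NIC's kernel interfaces by globbing `/sys/bus/pci/devices/<pci>/net/`. A small sketch of that lookup with the sysfs root parameterized so it can run against a mock tree (the real script globs the live `/sys` directly):

```shell
#!/usr/bin/env bash
# Sketch of the PCI-to-netdev discovery traced above: a NIC's network
# interfaces appear as subdirectories of its sysfs node under .../net/.
# 'sysfs' is a parameter here (an illustrative addition) so the sketch can
# be exercised against a mock directory tree instead of the live /sys.
list_net_devs() {
  local sysfs=$1 pci=$2 net
  for net in "$sysfs/$pci/net/"*; do
    [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
  done
}
```

In this run both `0000:0a:00.0` and `0000:0a:00.1` (Intel E810, device id 0x159b, `ice` driver) resolve to one renamed interface each, `cvl_0_0` and `cvl_0_1`.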
22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
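The sequence above is the target-namespace plumbing from `nvmf_tcp_init`: the target NIC moves into its own network namespace, each side gets a 10.0.0.x/24 address, links come up, and an iptables rule (tagged with an `SPDK_NVMF:` comment so cleanup can later grep it out of `iptables-save`) opens port 4420. A dry-run sketch of those steps, with interface names and addresses taken from this run's log; it only echoes the commands, since executing them requires root and the `cvl_0_*` interfaces:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced above (nvmf_tcp_init).
# RUN=echo prints each step; clear RUN and run as root to execute for real.
RUN=${RUN:-echo}
setup_ns() {
  local ns=cvl_0_0_ns_spdk
  $RUN ip netns add "$ns"                    # isolate the target side
  $RUN ip link set cvl_0_0 netns "$ns"       # move the target NIC into it
  $RUN ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator-side address
  $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  $RUN ip link set cvl_0_1 up
  $RUN ip netns exec "$ns" ip link set cvl_0_0 up
  $RUN ip netns exec "$ns" ip link set lo up
  # open the NVMe/TCP listen port (the real rule carries an SPDK_NVMF comment)
  $RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
setup_ns
```

The two pings that follow in the log are the sanity check for exactly this topology: initiator namespace to 10.0.0.2, and `ip netns exec` back to 10.0.0.1.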
00:12:50.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:12:50.961 00:12:50.961 --- 10.0.0.2 ping statistics --- 00:12:50.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.961 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:50.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:12:50.961 00:12:50.961 --- 10.0.0.1 ping statistics --- 00:12:50.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.961 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 
0xE 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=28500 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 28500 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 28500 ']' 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.961 [2024-12-10 22:43:58.423026] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:12:50.961 [2024-12-10 22:43:58.423129] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.961 [2024-12-10 22:43:58.497097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:50.961 [2024-12-10 22:43:58.556967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.961 [2024-12-10 22:43:58.557018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.961 [2024-12-10 22:43:58.557046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.961 [2024-12-10 22:43:58.557057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.961 [2024-12-10 22:43:58.557066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:50.961 [2024-12-10 22:43:58.558668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.961 [2024-12-10 22:43:58.558727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.961 [2024-12-10 22:43:58.558723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:50.961 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.222 [2024-12-10 22:43:58.711472] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.222 [2024-12-10 22:43:58.728540] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.222 NULL1 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=28544 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.222 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.483 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.483 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:51.483 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.483 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.483 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.741 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.741 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:51.741 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.741 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.741 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.310 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.310 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:52.310 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.310 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.310 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.570 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.570 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:52.570 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.570 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.570 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.830 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.830 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:52.830 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.830 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.830 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.089 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.089 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:53.089 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.089 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.089 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.347 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.347 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:53.347 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.347 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.347 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.915 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.915 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:53.915 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.915 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.915 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.176 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.176 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:54.176 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.176 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.176 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.437 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.437 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:54.437 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.437 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.437 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.695 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.695 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:54.695 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.695 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.695 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.955 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.955 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:54.955 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.955 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.955 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.524 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.524 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:55.524 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.524 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.524 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.782 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.782 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:55.782 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.782 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.782 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.040 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.040 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 28544 00:12:56.040 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.040 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.040 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.299 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.299 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:56.299 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.299 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.299 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.559 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.559 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:56.559 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.559 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.559 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.129 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.129 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:57.129 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.129 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.129 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.386 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.386 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:57.386 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.386 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.386 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.652 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.652 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:57.652 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.652 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.652 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.915 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.915 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:57.915 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.915 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.915 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.173 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.173 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:58.173 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.173 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.173 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.738 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.738 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:58.738 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.738 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.738 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.997 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.997 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:58.997 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.997 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.997 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.256 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.256 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:59.256 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.256 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.256 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.516 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.517 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:59.517 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.517 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.517 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.776 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.776 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:12:59.776 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.776 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.776 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.345 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.345 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:13:00.345 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.345 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.345 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.604 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.604 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:13:00.604 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.604 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.604 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.864 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.864 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:13:00.864 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.864 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.864 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.124 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.124 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:13:01.124 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.124 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.124 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.382 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:01.382 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:01.382 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 28544 00:13:01.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (28544) - No such process 00:13:01.382 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 28544 00:13:01.382 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:01.382 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:01.383 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:01.383 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:01.383 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:01.383 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:01.383 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:01.383 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.383 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.383 rmmod nvme_tcp 00:13:01.383 rmmod nvme_fabrics 00:13:01.383 rmmod nvme_keyring 00:13:01.383 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@517 -- # '[' -n 28500 ']' 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 28500 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 28500 ']' 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 28500 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 28500 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 28500' 00:13:01.641 killing process with pid 28500 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 28500 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 28500 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 
-- # iptables-save 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.641 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:04.178 00:13:04.178 real 0m15.580s 00:13:04.178 user 0m38.488s 00:13:04.178 sys 0m6.074s 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.178 ************************************ 00:13:04.178 END TEST nvmf_connect_stress 00:13:04.178 ************************************ 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.178 ************************************ 00:13:04.178 START TEST nvmf_fused_ordering 00:13:04.178 ************************************ 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:04.178 * Looking for test storage... 00:13:04.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.178 22:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.178 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:04.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.179 --rc genhtml_branch_coverage=1 00:13:04.179 --rc genhtml_function_coverage=1 00:13:04.179 --rc genhtml_legend=1 00:13:04.179 --rc geninfo_all_blocks=1 00:13:04.179 --rc geninfo_unexecuted_blocks=1 00:13:04.179 00:13:04.179 ' 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:04.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.179 --rc genhtml_branch_coverage=1 00:13:04.179 --rc genhtml_function_coverage=1 00:13:04.179 --rc genhtml_legend=1 00:13:04.179 --rc geninfo_all_blocks=1 00:13:04.179 --rc geninfo_unexecuted_blocks=1 00:13:04.179 00:13:04.179 ' 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:04.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.179 --rc genhtml_branch_coverage=1 00:13:04.179 --rc genhtml_function_coverage=1 00:13:04.179 --rc genhtml_legend=1 00:13:04.179 --rc geninfo_all_blocks=1 00:13:04.179 --rc geninfo_unexecuted_blocks=1 00:13:04.179 00:13:04.179 ' 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:04.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.179 --rc genhtml_branch_coverage=1 00:13:04.179 --rc 
genhtml_function_coverage=1 00:13:04.179 --rc genhtml_legend=1 00:13:04.179 --rc geninfo_all_blocks=1 00:13:04.179 --rc geninfo_unexecuted_blocks=1 00:13:04.179 00:13:04.179 ' 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.179 22:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:04.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
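The `[: : integer expression expected` error captured just above comes from the traced test `'[' '' -eq 1 ']'`: `test`'s `-eq` requires an integer on both sides, and the variable being expanded was empty. A minimal reproduction and a guarded form, as a sketch — the `flag` variable name is illustrative, not taken from `nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# Reproduces the "[: : integer expression expected" failure mode seen in the
# trace, then shows a guarded numeric test. "flag" is a stand-in name.
flag=""

# The failing shape: an empty string where -eq expects an integer.
# [ "$flag" -eq 1 ]   # -> bash: [: : integer expression expected

# Guarded form: ${flag:-0} substitutes 0 when flag is unset OR empty,
# so the numeric comparison always sees an integer.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

With `flag` empty this prints `disabled` instead of aborting the test with a non-integer error.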
00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.179 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.180 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:04.180 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:04.180 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:04.180 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.084 22:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:06.084 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.084 22:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:06.084 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.084 22:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:06.084 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:06.084 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:06.085 Found net devices under 0000:0a:00.1: cvl_0_1 
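The `Found net devices under 0000:0a:00.x: cvl_0_x` lines above come from a sysfs glob: for each candidate PCI function, the harness expands `/sys/bus/pci/devices/$pci/net/*` and strips the path prefix (`"${pci_net_devs[@]##*/}"`) to recover interface names. A sketch of that lookup — the function name is illustrative; the trace does this inline:

```shell
#!/usr/bin/env bash
# Sketch of the sysfs lookup behind "Found net devices under <pci>: <iface>".
# list_pci_net_devs is an illustrative name, not an SPDK helper.
list_pci_net_devs() {
    local pci=$1
    # Glob the net/ directory of the PCI device in sysfs.
    local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # Keep only the interface names, e.g. .../net/cvl_0_0 -> cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")
    printf '%s\n' "${pci_net_devs[@]}"
}

# Usage (PCI address taken from the log above):
# list_pci_net_devs 0000:0a:00.0
```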
00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.085 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:06.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:13:06.379 00:13:06.379 --- 10.0.0.2 ping statistics --- 00:13:06.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.379 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:13:06.379 00:13:06.379 --- 10.0.0.1 ping statistics --- 00:13:06.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.379 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:06.379 22:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=31801 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 31801 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 31801 ']' 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.379 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:06.379 [2024-12-10 22:44:14.015089] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:13:06.379 [2024-12-10 22:44:14.015167] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.379 [2024-12-10 22:44:14.088251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.665 [2024-12-10 22:44:14.141991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.665 [2024-12-10 22:44:14.142052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.665 [2024-12-10 22:44:14.142081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.665 [2024-12-10 22:44:14.142092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.665 [2024-12-10 22:44:14.142102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:06.665 [2024-12-10 22:44:14.142735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 [2024-12-10 22:44:14.282976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 [2024-12-10 22:44:14.299201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 NULL1 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.666 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:06.666 [2024-12-10 22:44:14.343574] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:13:06.666 [2024-12-10 22:44:14.343613] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid31827 ] 00:13:07.235 Attached to nqn.2016-06.io.spdk:cnode1 00:13:07.235 Namespace ID: 1 size: 1GB 00:13:07.235 fused_ordering(0) 00:13:07.235 fused_ordering(1) 00:13:07.235 fused_ordering(2) 00:13:07.235 fused_ordering(3) 00:13:07.235 fused_ordering(4) 00:13:07.235 fused_ordering(5) 00:13:07.235 fused_ordering(6) 00:13:07.235 fused_ordering(7) 00:13:07.235 fused_ordering(8) 00:13:07.235 fused_ordering(9) 00:13:07.235 fused_ordering(10) 00:13:07.235 fused_ordering(11) 00:13:07.235 fused_ordering(12) 00:13:07.235 fused_ordering(13) 00:13:07.235 fused_ordering(14) 00:13:07.235 fused_ordering(15) 00:13:07.235 fused_ordering(16) 00:13:07.235 fused_ordering(17) 00:13:07.235 fused_ordering(18) 00:13:07.235 fused_ordering(19) 00:13:07.235 fused_ordering(20) 00:13:07.235 fused_ordering(21) 00:13:07.235 fused_ordering(22) 00:13:07.235 fused_ordering(23) 00:13:07.235 fused_ordering(24) 00:13:07.235 fused_ordering(25) 00:13:07.235 fused_ordering(26) 00:13:07.235 fused_ordering(27) 00:13:07.235 fused_ordering(28) 
00:13:07.235 fused_ordering(29) 00:13:07.235 fused_ordering(30) 00:13:07.235 fused_ordering(31) 00:13:07.235 fused_ordering(32) 00:13:07.235 fused_ordering(33) 00:13:07.235 fused_ordering(34) 00:13:07.235 fused_ordering(35) 00:13:07.235 fused_ordering(36) 00:13:07.235 fused_ordering(37) 00:13:07.235 fused_ordering(38) 00:13:07.235 fused_ordering(39) 00:13:07.235 fused_ordering(40) 00:13:07.235 fused_ordering(41) 00:13:07.235 fused_ordering(42) 00:13:07.235 fused_ordering(43) 00:13:07.235 fused_ordering(44) 00:13:07.235 fused_ordering(45) 00:13:07.235 fused_ordering(46) 00:13:07.235 fused_ordering(47) 00:13:07.235 fused_ordering(48) 00:13:07.235 fused_ordering(49) 00:13:07.235 fused_ordering(50) 00:13:07.235 fused_ordering(51) 00:13:07.235 fused_ordering(52) 00:13:07.235 fused_ordering(53) 00:13:07.235 fused_ordering(54) 00:13:07.235 fused_ordering(55) 00:13:07.235 fused_ordering(56) 00:13:07.235 fused_ordering(57) 00:13:07.235 fused_ordering(58) 00:13:07.235 fused_ordering(59) 00:13:07.235 fused_ordering(60) 00:13:07.235 fused_ordering(61) 00:13:07.235 fused_ordering(62) 00:13:07.235 fused_ordering(63) 00:13:07.235 fused_ordering(64) 00:13:07.235 fused_ordering(65) 00:13:07.235 fused_ordering(66) 00:13:07.235 fused_ordering(67) 00:13:07.235 fused_ordering(68) 00:13:07.235 fused_ordering(69) 00:13:07.235 fused_ordering(70) 00:13:07.235 fused_ordering(71) 00:13:07.235 fused_ordering(72) 00:13:07.235 fused_ordering(73) 00:13:07.235 fused_ordering(74) 00:13:07.235 fused_ordering(75) 00:13:07.235 fused_ordering(76) 00:13:07.235 fused_ordering(77) 00:13:07.235 fused_ordering(78) 00:13:07.235 fused_ordering(79) 00:13:07.235 fused_ordering(80) 00:13:07.235 fused_ordering(81) 00:13:07.235 fused_ordering(82) 00:13:07.235 fused_ordering(83) 00:13:07.235 fused_ordering(84) 00:13:07.235 fused_ordering(85) 00:13:07.235 fused_ordering(86) 00:13:07.235 fused_ordering(87) 00:13:07.235 fused_ordering(88) 00:13:07.235 fused_ordering(89) 00:13:07.235 fused_ordering(90) 00:13:07.235 
fused_ordering(91) 00:13:07.235 fused_ordering(92) 00:13:07.235 fused_ordering(93) 00:13:07.235 fused_ordering(94) 00:13:07.235 fused_ordering(95) 00:13:07.235 fused_ordering(96) 00:13:07.235 fused_ordering(97) 00:13:07.235 fused_ordering(98) 00:13:07.235 fused_ordering(99) 00:13:07.235 fused_ordering(100) 00:13:07.235 fused_ordering(101) 00:13:07.235 fused_ordering(102) 00:13:07.235 fused_ordering(103) 00:13:07.235 fused_ordering(104) 00:13:07.235 fused_ordering(105) 00:13:07.235 fused_ordering(106) 00:13:07.235 fused_ordering(107) 00:13:07.235 fused_ordering(108) 00:13:07.235 fused_ordering(109) 00:13:07.235 fused_ordering(110) 00:13:07.235 fused_ordering(111) 00:13:07.235 fused_ordering(112) 00:13:07.235 fused_ordering(113) 00:13:07.235 fused_ordering(114) 00:13:07.235 fused_ordering(115) 00:13:07.235 fused_ordering(116) 00:13:07.235 fused_ordering(117) 00:13:07.235 fused_ordering(118) 00:13:07.235 fused_ordering(119) 00:13:07.235 fused_ordering(120) 00:13:07.235 fused_ordering(121) 00:13:07.235 fused_ordering(122) 00:13:07.235 fused_ordering(123) 00:13:07.235 fused_ordering(124) 00:13:07.235 fused_ordering(125) 00:13:07.235 fused_ordering(126) 00:13:07.235 fused_ordering(127) 00:13:07.235 fused_ordering(128) 00:13:07.235 fused_ordering(129) 00:13:07.235 fused_ordering(130) 00:13:07.235 fused_ordering(131) 00:13:07.235 fused_ordering(132) 00:13:07.235 fused_ordering(133) 00:13:07.235 fused_ordering(134) 00:13:07.235 fused_ordering(135) 00:13:07.235 fused_ordering(136) 00:13:07.235 fused_ordering(137) 00:13:07.235 fused_ordering(138) 00:13:07.235 fused_ordering(139) 00:13:07.235 fused_ordering(140) 00:13:07.235 fused_ordering(141) 00:13:07.235 fused_ordering(142) 00:13:07.235 fused_ordering(143) 00:13:07.235 fused_ordering(144) 00:13:07.235 fused_ordering(145) 00:13:07.235 fused_ordering(146) 00:13:07.235 fused_ordering(147) 00:13:07.235 fused_ordering(148) 00:13:07.235 fused_ordering(149) 00:13:07.235 fused_ordering(150) 00:13:07.235 fused_ordering(151) 
00:13:07.235 fused_ordering(152) 00:13:07.235 fused_ordering(153) 00:13:07.235 fused_ordering(154) 00:13:07.235 fused_ordering(155) 00:13:07.235 fused_ordering(156) 00:13:07.235 fused_ordering(157) 00:13:07.235 fused_ordering(158) 00:13:07.235 fused_ordering(159) 00:13:07.235 fused_ordering(160) 00:13:07.235 fused_ordering(161) 00:13:07.235 fused_ordering(162) 00:13:07.235 fused_ordering(163) 00:13:07.235 fused_ordering(164) 00:13:07.235 fused_ordering(165) 00:13:07.235 fused_ordering(166) 00:13:07.235 fused_ordering(167) 00:13:07.235 fused_ordering(168) 00:13:07.236 fused_ordering(169) 00:13:07.236 fused_ordering(170) 00:13:07.236 fused_ordering(171) 00:13:07.236 fused_ordering(172) 00:13:07.236 fused_ordering(173) 00:13:07.236 fused_ordering(174) 00:13:07.236 fused_ordering(175) 00:13:07.236 fused_ordering(176) 00:13:07.236 fused_ordering(177) 00:13:07.236 fused_ordering(178) 00:13:07.236 fused_ordering(179) 00:13:07.236 fused_ordering(180) 00:13:07.236 fused_ordering(181) 00:13:07.236 fused_ordering(182) 00:13:07.236 fused_ordering(183) 00:13:07.236 fused_ordering(184) 00:13:07.236 fused_ordering(185) 00:13:07.236 fused_ordering(186) 00:13:07.236 fused_ordering(187) 00:13:07.236 fused_ordering(188) 00:13:07.236 fused_ordering(189) 00:13:07.236 fused_ordering(190) 00:13:07.236 fused_ordering(191) 00:13:07.236 fused_ordering(192) 00:13:07.236 fused_ordering(193) 00:13:07.236 fused_ordering(194) 00:13:07.236 fused_ordering(195) 00:13:07.236 fused_ordering(196) 00:13:07.236 fused_ordering(197) 00:13:07.236 fused_ordering(198) 00:13:07.236 fused_ordering(199) 00:13:07.236 fused_ordering(200) 00:13:07.236 fused_ordering(201) 00:13:07.236 fused_ordering(202) 00:13:07.236 fused_ordering(203) 00:13:07.236 fused_ordering(204) 00:13:07.236 fused_ordering(205) 00:13:07.495 fused_ordering(206) 00:13:07.495 fused_ordering(207) 00:13:07.495 fused_ordering(208) 00:13:07.495 fused_ordering(209) 00:13:07.495 fused_ordering(210) 00:13:07.495 fused_ordering(211) 00:13:07.495 
fused_ordering(212) 00:13:07.495 fused_ordering(213) 00:13:07.495 fused_ordering(214) 00:13:07.495 fused_ordering(215) 00:13:07.495 fused_ordering(216) 00:13:07.495 fused_ordering(217) 00:13:07.495 fused_ordering(218) 00:13:07.495 fused_ordering(219) 00:13:07.495 fused_ordering(220) 00:13:07.495 fused_ordering(221) 00:13:07.495 fused_ordering(222) 00:13:07.495 fused_ordering(223) 00:13:07.495 fused_ordering(224) 00:13:07.495 fused_ordering(225) 00:13:07.495 fused_ordering(226) 00:13:07.495 fused_ordering(227) 00:13:07.495 fused_ordering(228) 00:13:07.495 fused_ordering(229) 00:13:07.495 fused_ordering(230) 00:13:07.495 fused_ordering(231) 00:13:07.495 fused_ordering(232) 00:13:07.495 fused_ordering(233) 00:13:07.495 fused_ordering(234) 00:13:07.495 fused_ordering(235) 00:13:07.495 fused_ordering(236) 00:13:07.495 fused_ordering(237) 00:13:07.495 fused_ordering(238) 00:13:07.495 fused_ordering(239) 00:13:07.495 fused_ordering(240) 00:13:07.495 fused_ordering(241) 00:13:07.495 fused_ordering(242) 00:13:07.495 fused_ordering(243) 00:13:07.495 fused_ordering(244) 00:13:07.495 fused_ordering(245) 00:13:07.495 fused_ordering(246) 00:13:07.495 fused_ordering(247) 00:13:07.495 fused_ordering(248) 00:13:07.495 fused_ordering(249) 00:13:07.495 fused_ordering(250) 00:13:07.495 fused_ordering(251) 00:13:07.495 fused_ordering(252) 00:13:07.495 fused_ordering(253) 00:13:07.495 fused_ordering(254) 00:13:07.495 fused_ordering(255) 00:13:07.495 fused_ordering(256) 00:13:07.495 fused_ordering(257) 00:13:07.495 fused_ordering(258) 00:13:07.495 fused_ordering(259) 00:13:07.495 fused_ordering(260) 00:13:07.495 fused_ordering(261) 00:13:07.495 fused_ordering(262) 00:13:07.495 fused_ordering(263) 00:13:07.495 fused_ordering(264) 00:13:07.495 fused_ordering(265) 00:13:07.495 fused_ordering(266) 00:13:07.495 fused_ordering(267) 00:13:07.495 fused_ordering(268) 00:13:07.495 fused_ordering(269) 00:13:07.495 fused_ordering(270) 00:13:07.495 fused_ordering(271) 00:13:07.495 fused_ordering(272) 
00:13:07.495 fused_ordering(273) 00:13:07.495 fused_ordering(274) 00:13:07.495 fused_ordering(275) 00:13:07.495 fused_ordering(276) 00:13:07.495 fused_ordering(277) 00:13:07.495 fused_ordering(278) 00:13:07.495 fused_ordering(279) 00:13:07.495 fused_ordering(280) 00:13:07.495 fused_ordering(281) 00:13:07.495 fused_ordering(282) 00:13:07.495 fused_ordering(283) 00:13:07.495 fused_ordering(284) 00:13:07.495 fused_ordering(285) 00:13:07.495 fused_ordering(286) 00:13:07.495 fused_ordering(287) 00:13:07.495 fused_ordering(288) 00:13:07.495 fused_ordering(289) 00:13:07.495 fused_ordering(290) 00:13:07.495 fused_ordering(291) 00:13:07.495 fused_ordering(292) 00:13:07.495 fused_ordering(293) 00:13:07.495 fused_ordering(294) 00:13:07.495 fused_ordering(295) 00:13:07.495 fused_ordering(296) 00:13:07.495 fused_ordering(297) 00:13:07.495 fused_ordering(298) 00:13:07.495 fused_ordering(299) 00:13:07.495 fused_ordering(300) 00:13:07.495 fused_ordering(301) 00:13:07.495 fused_ordering(302) 00:13:07.495 fused_ordering(303) 00:13:07.495 fused_ordering(304) 00:13:07.495 fused_ordering(305) 00:13:07.495 fused_ordering(306) 00:13:07.495 fused_ordering(307) 00:13:07.495 fused_ordering(308) 00:13:07.495 fused_ordering(309) 00:13:07.495 fused_ordering(310) 00:13:07.495 fused_ordering(311) 00:13:07.495 fused_ordering(312) 00:13:07.495 fused_ordering(313) 00:13:07.495 fused_ordering(314) 00:13:07.495 fused_ordering(315) 00:13:07.495 fused_ordering(316) 00:13:07.495 fused_ordering(317) 00:13:07.495 fused_ordering(318) 00:13:07.495 fused_ordering(319) 00:13:07.495 fused_ordering(320) 00:13:07.495 fused_ordering(321) 00:13:07.495 fused_ordering(322) 00:13:07.495 fused_ordering(323) 00:13:07.495 fused_ordering(324) 00:13:07.495 fused_ordering(325) 00:13:07.495 fused_ordering(326) 00:13:07.495 fused_ordering(327) 00:13:07.495 fused_ordering(328) 00:13:07.495 fused_ordering(329) 00:13:07.495 fused_ordering(330) 00:13:07.495 fused_ordering(331) 00:13:07.495 fused_ordering(332) 00:13:07.495 
fused_ordering(333) 00:13:07.495 fused_ordering(334) 00:13:07.495 fused_ordering(335) 00:13:07.495 fused_ordering(336) 00:13:07.495 fused_ordering(337) 00:13:07.495 fused_ordering(338) 00:13:07.495 fused_ordering(339) 00:13:07.495 fused_ordering(340) 00:13:07.495 fused_ordering(341) 00:13:07.495 fused_ordering(342) 00:13:07.495 fused_ordering(343) 00:13:07.495 fused_ordering(344) 00:13:07.495 fused_ordering(345) 00:13:07.495 fused_ordering(346) 00:13:07.495 fused_ordering(347) 00:13:07.495 fused_ordering(348) 00:13:07.495 fused_ordering(349) 00:13:07.495 fused_ordering(350) 00:13:07.495 fused_ordering(351) 00:13:07.495 fused_ordering(352) 00:13:07.495 fused_ordering(353) 00:13:07.495 fused_ordering(354) 00:13:07.495 fused_ordering(355) 00:13:07.495 fused_ordering(356) 00:13:07.495 fused_ordering(357) 00:13:07.495 fused_ordering(358) 00:13:07.495 fused_ordering(359) 00:13:07.495 fused_ordering(360) 00:13:07.495 fused_ordering(361) 00:13:07.495 fused_ordering(362) 00:13:07.495 fused_ordering(363) 00:13:07.495 fused_ordering(364) 00:13:07.495 fused_ordering(365) 00:13:07.495 fused_ordering(366) 00:13:07.495 fused_ordering(367) 00:13:07.495 fused_ordering(368) 00:13:07.495 fused_ordering(369) 00:13:07.495 fused_ordering(370) 00:13:07.495 fused_ordering(371) 00:13:07.495 fused_ordering(372) 00:13:07.495 fused_ordering(373) 00:13:07.495 fused_ordering(374) 00:13:07.495 fused_ordering(375) 00:13:07.495 fused_ordering(376) 00:13:07.495 fused_ordering(377) 00:13:07.495 fused_ordering(378) 00:13:07.495 fused_ordering(379) 00:13:07.495 fused_ordering(380) 00:13:07.495 fused_ordering(381) 00:13:07.495 fused_ordering(382) 00:13:07.495 fused_ordering(383) 00:13:07.495 fused_ordering(384) 00:13:07.495 fused_ordering(385) 00:13:07.495 fused_ordering(386) 00:13:07.495 fused_ordering(387) 00:13:07.495 fused_ordering(388) 00:13:07.495 fused_ordering(389) 00:13:07.495 fused_ordering(390) 00:13:07.495 fused_ordering(391) 00:13:07.495 fused_ordering(392) 00:13:07.495 fused_ordering(393) 
00:13:07.495 fused_ordering(394) 00:13:07.495 fused_ordering(395) 00:13:07.495 fused_ordering(396) 00:13:07.495 fused_ordering(397) 00:13:07.495 fused_ordering(398) 00:13:07.495 fused_ordering(399) 00:13:07.495 fused_ordering(400) 00:13:07.495 fused_ordering(401) 00:13:07.495 fused_ordering(402) 00:13:07.495 fused_ordering(403) 00:13:07.495 fused_ordering(404) 00:13:07.495 fused_ordering(405) 00:13:07.495 fused_ordering(406) 00:13:07.495 fused_ordering(407) 00:13:07.495 fused_ordering(408) 00:13:07.495 fused_ordering(409) 00:13:07.495 fused_ordering(410) 00:13:08.064 fused_ordering(411) 00:13:08.064 fused_ordering(412) 00:13:08.064 fused_ordering(413) 00:13:08.064 fused_ordering(414) 00:13:08.064 fused_ordering(415) 00:13:08.064 fused_ordering(416) 00:13:08.064 fused_ordering(417) 00:13:08.064 fused_ordering(418) 00:13:08.064 fused_ordering(419) 00:13:08.064 fused_ordering(420) 00:13:08.064 fused_ordering(421) 00:13:08.064 fused_ordering(422) 00:13:08.064 fused_ordering(423) 00:13:08.064 fused_ordering(424) 00:13:08.064 fused_ordering(425) 00:13:08.064 fused_ordering(426) 00:13:08.064 fused_ordering(427) 00:13:08.064 fused_ordering(428) 00:13:08.064 fused_ordering(429) 00:13:08.064 fused_ordering(430) 00:13:08.064 fused_ordering(431) 00:13:08.064 fused_ordering(432) 00:13:08.064 fused_ordering(433) 00:13:08.064 fused_ordering(434) 00:13:08.064 fused_ordering(435) 00:13:08.064 fused_ordering(436) 00:13:08.064 fused_ordering(437) 00:13:08.064 fused_ordering(438) 00:13:08.064 fused_ordering(439) 00:13:08.064 fused_ordering(440) 00:13:08.064 fused_ordering(441) 00:13:08.064 fused_ordering(442) 00:13:08.064 fused_ordering(443) 00:13:08.064 fused_ordering(444) 00:13:08.064 fused_ordering(445) 00:13:08.064 fused_ordering(446) 00:13:08.064 fused_ordering(447) 00:13:08.064 fused_ordering(448) 00:13:08.064 fused_ordering(449) 00:13:08.064 fused_ordering(450) 00:13:08.064 fused_ordering(451) 00:13:08.064 fused_ordering(452) 00:13:08.064 fused_ordering(453) 00:13:08.064 
fused_ordering(454) 00:13:08.064 fused_ordering(455) 00:13:08.064 fused_ordering(456) 00:13:08.064 fused_ordering(457) 00:13:08.064 fused_ordering(458) 00:13:08.064 fused_ordering(459) 00:13:08.064 fused_ordering(460) 00:13:08.064 fused_ordering(461) 00:13:08.064 fused_ordering(462) 00:13:08.064 fused_ordering(463) 00:13:08.064 fused_ordering(464) 00:13:08.064 fused_ordering(465) 00:13:08.064 fused_ordering(466) 00:13:08.064 fused_ordering(467) 00:13:08.064 fused_ordering(468) 00:13:08.064 fused_ordering(469) 00:13:08.064 fused_ordering(470) 00:13:08.064 fused_ordering(471) 00:13:08.064 fused_ordering(472) 00:13:08.064 fused_ordering(473) 00:13:08.064 fused_ordering(474) 00:13:08.064 fused_ordering(475) 00:13:08.064 fused_ordering(476) 00:13:08.064 fused_ordering(477) 00:13:08.064 fused_ordering(478) 00:13:08.064 fused_ordering(479) 00:13:08.064 fused_ordering(480) 00:13:08.064 fused_ordering(481) 00:13:08.064 fused_ordering(482) 00:13:08.064 fused_ordering(483) 00:13:08.064 fused_ordering(484) 00:13:08.064 fused_ordering(485) 00:13:08.064 fused_ordering(486) 00:13:08.064 fused_ordering(487) 00:13:08.064 fused_ordering(488) 00:13:08.064 fused_ordering(489) 00:13:08.064 fused_ordering(490) 00:13:08.064 fused_ordering(491) 00:13:08.064 fused_ordering(492) 00:13:08.064 fused_ordering(493) 00:13:08.064 fused_ordering(494) 00:13:08.064 fused_ordering(495) 00:13:08.064 fused_ordering(496) 00:13:08.064 fused_ordering(497) 00:13:08.064 fused_ordering(498) 00:13:08.064 fused_ordering(499) 00:13:08.064 fused_ordering(500) 00:13:08.064 fused_ordering(501) 00:13:08.064 fused_ordering(502) 00:13:08.064 fused_ordering(503) 00:13:08.064 fused_ordering(504) 00:13:08.064 fused_ordering(505) 00:13:08.064 fused_ordering(506) 00:13:08.064 fused_ordering(507) 00:13:08.064 fused_ordering(508) 00:13:08.064 fused_ordering(509) 00:13:08.064 fused_ordering(510) 00:13:08.064 fused_ordering(511) 00:13:08.064 fused_ordering(512) 00:13:08.064 fused_ordering(513) 00:13:08.064 fused_ordering(514) 
00:13:08.064 fused_ordering(515) 00:13:08.064 fused_ordering(516) 00:13:08.064 fused_ordering(517) 00:13:08.064 fused_ordering(518) 00:13:08.064 fused_ordering(519) 00:13:08.064 fused_ordering(520) 00:13:08.064 fused_ordering(521) 00:13:08.064 fused_ordering(522) 00:13:08.064 fused_ordering(523) 00:13:08.064 fused_ordering(524) 00:13:08.064 fused_ordering(525) 00:13:08.064 fused_ordering(526) 00:13:08.064 fused_ordering(527) 00:13:08.064 fused_ordering(528) 00:13:08.064 fused_ordering(529) 00:13:08.064 fused_ordering(530) 00:13:08.064 fused_ordering(531) 00:13:08.064 fused_ordering(532) 00:13:08.064 fused_ordering(533) 00:13:08.064 fused_ordering(534) 00:13:08.064 fused_ordering(535) 00:13:08.064 fused_ordering(536) 00:13:08.064 fused_ordering(537) 00:13:08.064 fused_ordering(538) 00:13:08.064 fused_ordering(539) 00:13:08.064 fused_ordering(540) 00:13:08.064 fused_ordering(541) 00:13:08.064 fused_ordering(542) 00:13:08.064 fused_ordering(543) 00:13:08.064 fused_ordering(544) 00:13:08.064 fused_ordering(545) 00:13:08.064 fused_ordering(546) 00:13:08.064 fused_ordering(547) 00:13:08.064 fused_ordering(548) 00:13:08.064 fused_ordering(549) 00:13:08.064 fused_ordering(550) 00:13:08.064 fused_ordering(551) 00:13:08.064 fused_ordering(552) 00:13:08.064 fused_ordering(553) 00:13:08.064 fused_ordering(554) 00:13:08.064 fused_ordering(555) 00:13:08.064 fused_ordering(556) 00:13:08.064 fused_ordering(557) 00:13:08.064 fused_ordering(558) 00:13:08.064 fused_ordering(559) 00:13:08.064 fused_ordering(560) 00:13:08.064 fused_ordering(561) 00:13:08.064 fused_ordering(562) 00:13:08.064 fused_ordering(563) 00:13:08.064 fused_ordering(564) 00:13:08.064 fused_ordering(565) 00:13:08.064 fused_ordering(566) 00:13:08.064 fused_ordering(567) 00:13:08.064 fused_ordering(568) 00:13:08.064 fused_ordering(569) 00:13:08.064 fused_ordering(570) 00:13:08.064 fused_ordering(571) 00:13:08.064 fused_ordering(572) 00:13:08.064 fused_ordering(573) 00:13:08.064 fused_ordering(574) 00:13:08.064 
fused_ordering(575) 00:13:08.064 fused_ordering(576) 00:13:08.064 fused_ordering(577) 00:13:08.064 fused_ordering(578) 00:13:08.064 fused_ordering(579) 00:13:08.064 fused_ordering(580) 00:13:08.064 fused_ordering(581) 00:13:08.064 fused_ordering(582) 00:13:08.064 fused_ordering(583) 00:13:08.064 fused_ordering(584) 00:13:08.064 fused_ordering(585) 00:13:08.064 fused_ordering(586) 00:13:08.064 fused_ordering(587) 00:13:08.064 fused_ordering(588) 00:13:08.064 fused_ordering(589) 00:13:08.064 fused_ordering(590) 00:13:08.064 fused_ordering(591) 00:13:08.064 fused_ordering(592) 00:13:08.064 fused_ordering(593) 00:13:08.064 fused_ordering(594) 00:13:08.064 fused_ordering(595) 00:13:08.064 fused_ordering(596) 00:13:08.064 fused_ordering(597) 00:13:08.064 fused_ordering(598) 00:13:08.064 fused_ordering(599) 00:13:08.064 fused_ordering(600) 00:13:08.064 fused_ordering(601) 00:13:08.064 fused_ordering(602) 00:13:08.064 fused_ordering(603) 00:13:08.064 fused_ordering(604) 00:13:08.064 fused_ordering(605) 00:13:08.064 fused_ordering(606) 00:13:08.064 fused_ordering(607) 00:13:08.064 fused_ordering(608) 00:13:08.064 fused_ordering(609) 00:13:08.064 fused_ordering(610) 00:13:08.064 fused_ordering(611) 00:13:08.064 fused_ordering(612) 00:13:08.064 fused_ordering(613) 00:13:08.064 fused_ordering(614) 00:13:08.064 fused_ordering(615) 00:13:08.323 fused_ordering(616) 00:13:08.323 fused_ordering(617) 00:13:08.323 fused_ordering(618) 00:13:08.323 fused_ordering(619) 00:13:08.323 fused_ordering(620) 00:13:08.323 fused_ordering(621) 00:13:08.323 fused_ordering(622) 00:13:08.323 fused_ordering(623) 00:13:08.323 fused_ordering(624) 00:13:08.323 fused_ordering(625) 00:13:08.323 fused_ordering(626) 00:13:08.323 fused_ordering(627) 00:13:08.323 fused_ordering(628) 00:13:08.323 fused_ordering(629) 00:13:08.323 fused_ordering(630) 00:13:08.323 fused_ordering(631) 00:13:08.323 fused_ordering(632) 00:13:08.323 fused_ordering(633) 00:13:08.323 fused_ordering(634) 00:13:08.323 fused_ordering(635) 
00:13:08.323 fused_ordering(636) 00:13:08.323 fused_ordering(637) 00:13:08.323 fused_ordering(638) 00:13:08.323 fused_ordering(639) 00:13:08.323 fused_ordering(640) 00:13:08.323 fused_ordering(641) 00:13:08.323 fused_ordering(642) 00:13:08.323 fused_ordering(643) 00:13:08.323 fused_ordering(644) 00:13:08.323 fused_ordering(645) 00:13:08.323 fused_ordering(646) 00:13:08.323 fused_ordering(647) 00:13:08.323 fused_ordering(648) 00:13:08.323 fused_ordering(649) 00:13:08.323 fused_ordering(650) 00:13:08.323 fused_ordering(651) 00:13:08.323 fused_ordering(652) 00:13:08.323 fused_ordering(653) 00:13:08.323 fused_ordering(654) 00:13:08.323 fused_ordering(655) 00:13:08.323 fused_ordering(656) 00:13:08.323 fused_ordering(657) 00:13:08.323 fused_ordering(658) 00:13:08.323 fused_ordering(659) 00:13:08.323 fused_ordering(660) 00:13:08.323 fused_ordering(661) 00:13:08.323 fused_ordering(662) 00:13:08.323 fused_ordering(663) 00:13:08.323 fused_ordering(664) 00:13:08.323 fused_ordering(665) 00:13:08.323 fused_ordering(666) 00:13:08.323 fused_ordering(667) 00:13:08.323 fused_ordering(668) 00:13:08.323 fused_ordering(669) 00:13:08.323 fused_ordering(670) 00:13:08.323 fused_ordering(671) 00:13:08.323 fused_ordering(672) 00:13:08.323 fused_ordering(673) 00:13:08.323 fused_ordering(674) 00:13:08.323 fused_ordering(675) 00:13:08.323 fused_ordering(676) 00:13:08.323 fused_ordering(677) 00:13:08.323 fused_ordering(678) 00:13:08.323 fused_ordering(679) 00:13:08.323 fused_ordering(680) 00:13:08.323 fused_ordering(681) 00:13:08.323 fused_ordering(682) 00:13:08.323 fused_ordering(683) 00:13:08.323 fused_ordering(684) 00:13:08.323 fused_ordering(685) 00:13:08.323 fused_ordering(686) 00:13:08.323 fused_ordering(687) 00:13:08.323 fused_ordering(688) 00:13:08.323 fused_ordering(689) 00:13:08.323 fused_ordering(690) 00:13:08.323 fused_ordering(691) 00:13:08.323 fused_ordering(692) 00:13:08.323 fused_ordering(693) 00:13:08.323 fused_ordering(694) 00:13:08.323 fused_ordering(695) 00:13:08.323 
fused_ordering(696) 00:13:08.323 [fused_ordering counters 697 through 1022 elided: identical per-counter lines repeated between 00:13:08.323 and 00:13:09.283] fused_ordering(1023) 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:09.283 rmmod nvme_tcp 00:13:09.283 rmmod nvme_fabrics 00:13:09.283 rmmod nvme_keyring 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.283 22:44:16
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 31801 ']' 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 31801 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 31801 ']' 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 31801 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 31801 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 31801' 00:13:09.283 killing process with pid 31801 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 31801 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 31801 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:09.283 22:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.283 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:11.822 00:13:11.822 real 0m7.561s 00:13:11.822 user 0m5.067s 00:13:11.822 sys 0m3.139s 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.822 ************************************ 00:13:11.822 END TEST nvmf_fused_ordering 00:13:11.822 ************************************ 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.822 ************************************ 00:13:11.822 START TEST nvmf_ns_masking 00:13:11.822 ************************************ 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:11.822 * Looking for test storage... 00:13:11.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@338 -- # local 'op=<' 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > 
ver2[v] )) 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:11.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.822 --rc genhtml_branch_coverage=1 00:13:11.822 --rc genhtml_function_coverage=1 00:13:11.822 --rc genhtml_legend=1 00:13:11.822 --rc geninfo_all_blocks=1 00:13:11.822 --rc geninfo_unexecuted_blocks=1 00:13:11.822 00:13:11.822 ' 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:11.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.822 --rc genhtml_branch_coverage=1 00:13:11.822 --rc genhtml_function_coverage=1 00:13:11.822 --rc genhtml_legend=1 00:13:11.822 --rc geninfo_all_blocks=1 00:13:11.822 --rc geninfo_unexecuted_blocks=1 00:13:11.822 00:13:11.822 ' 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:11.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.822 --rc genhtml_branch_coverage=1 00:13:11.822 --rc genhtml_function_coverage=1 00:13:11.822 --rc genhtml_legend=1 00:13:11.822 --rc geninfo_all_blocks=1 00:13:11.822 --rc geninfo_unexecuted_blocks=1 00:13:11.822 00:13:11.822 ' 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:11.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.822 --rc genhtml_branch_coverage=1 00:13:11.822 --rc genhtml_function_coverage=1 00:13:11.822 --rc 
genhtml_legend=1 00:13:11.822 --rc geninfo_all_blocks=1 00:13:11.822 --rc geninfo_unexecuted_blocks=1 00:13:11.822 00:13:11.822 ' 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.822 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f43ac9d9-1b37-4776-90ca-d9c19be66ec6 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3199bb95-96d4-43f9-a5b3-466cf436b878 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=89a94b99-8b51-4858-970b-6258f66d1a8d 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:11.823 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:13.731 22:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.731 22:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:13.731 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:13.731 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:13:13.731 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:13.731 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:13.731 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:13.732 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:13.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:13:13.993 00:13:13.993 --- 10.0.0.2 ping statistics --- 00:13:13.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.993 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:13:13.993 00:13:13.993 --- 10.0.0.1 ping statistics --- 00:13:13.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.993 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=34144 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 34144 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 34144 ']' 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.993 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:13.993 [2024-12-10 22:44:21.656466] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:13:13.993 [2024-12-10 22:44:21.656556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.252 [2024-12-10 22:44:21.727657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.252 [2024-12-10 22:44:21.786395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.252 [2024-12-10 22:44:21.786460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:14.252 [2024-12-10 22:44:21.786488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.252 [2024-12-10 22:44:21.786509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.252 [2024-12-10 22:44:21.786519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.252 [2024-12-10 22:44:21.787212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.252 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.252 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:14.252 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:14.252 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:14.252 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:14.252 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.252 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:14.511 [2024-12-10 22:44:22.188633] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.511 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:14.511 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:14.511 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:14.769 Malloc1 00:13:14.769 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:15.336 Malloc2 00:13:15.336 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:15.596 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:15.855 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.219 [2024-12-10 22:44:23.626193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.219 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:16.219 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 89a94b99-8b51-4858-970b-6258f66d1a8d -a 10.0.0.2 -s 4420 -i 4 00:13:16.219 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.219 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:16.219 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.219 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:16.219 22:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:18.144 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:18.144 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:18.144 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.144 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:18.403 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.403 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:18.403 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:18.403 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:18.403 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:18.403 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:18.403 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:18.403 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:18.403 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:18.403 [ 0]:0x1 00:13:18.403 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:18.403 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:18.403 
22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7079fc16b1984b8a9b49c836ece55c0b 00:13:18.403 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7079fc16b1984b8a9b49c836ece55c0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:18.403 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:18.661 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:18.661 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:18.661 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:18.661 [ 0]:0x1 00:13:18.661 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:18.661 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7079fc16b1984b8a9b49c836ece55c0b 00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7079fc16b1984b8a9b49c836ece55c0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:18.920 [ 1]:0x2 00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e22b4f23df643679e7e4343b773b342 00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e22b4f23df643679e7e4343b773b342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.920 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.177 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:19.436 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:19.436 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 89a94b99-8b51-4858-970b-6258f66d1a8d -a 10.0.0.2 -s 4420 -i 4 00:13:19.695 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:19.695 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:19.695 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.695 22:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:19.695 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:19.695 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:21.603 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:21.603 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:21.603 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.603 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:21.603 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.603 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:21.603 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:21.603 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:21.862 [ 0]:0x2 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e22b4f23df643679e7e4343b773b342 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e22b4f23df643679e7e4343b773b342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.862 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:22.120 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:22.120 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:22.120 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:22.120 [ 0]:0x1 00:13:22.120 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:22.121 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:22.121 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7079fc16b1984b8a9b49c836ece55c0b 00:13:22.121 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7079fc16b1984b8a9b49c836ece55c0b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:22.121 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:22.121 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:22.121 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:22.121 [ 1]:0x2 00:13:22.121 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:22.121 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:22.121 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e22b4f23df643679e7e4343b773b342 00:13:22.121 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e22b4f23df643679e7e4343b773b342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:22.121 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:22.687 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:22.687 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:22.687 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:22.688 [ 0]:0x2 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e22b4f23df643679e7e4343b773b342 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e22b4f23df643679e7e4343b773b342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.688 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:22.945 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:22.945 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 89a94b99-8b51-4858-970b-6258f66d1a8d -a 10.0.0.2 -s 4420 -i 4 00:13:23.203 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:23.203 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:23.203 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.203 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:23.203 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:23.203 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:25.110 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:25.110 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:25.110 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:25.371 [ 0]:0x1 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:25.371 22:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7079fc16b1984b8a9b49c836ece55c0b 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7079fc16b1984b8a9b49c836ece55c0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:25.371 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:25.371 [ 1]:0x2 00:13:25.371 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:25.371 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:25.371 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e22b4f23df643679e7e4343b773b342 00:13:25.371 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e22b4f23df643679e7e4343b773b342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:25.371 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:25.630 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:25.630 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:25.630 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:25.630 
22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:25.630 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.630 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:25.630 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.630 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:25.630 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:25.630 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:25.630 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:25.630 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:25.888 [ 0]:0x2 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e22b4f23df643679e7e4343b773b342 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e22b4f23df643679e7e4343b773b342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.888 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.889 22:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.889 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.889 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.889 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.889 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:25.889 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:26.148 [2024-12-10 22:44:33.680236] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:26.148 request: 00:13:26.148 { 00:13:26.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.148 "nsid": 2, 00:13:26.148 "host": "nqn.2016-06.io.spdk:host1", 00:13:26.148 "method": "nvmf_ns_remove_host", 00:13:26.148 "req_id": 1 00:13:26.148 } 00:13:26.148 Got JSON-RPC error response 00:13:26.148 response: 00:13:26.148 { 00:13:26.148 "code": -32602, 00:13:26.148 "message": "Invalid parameters" 00:13:26.148 } 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:26.148 22:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:26.148 [ 0]:0x2 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e22b4f23df643679e7e4343b773b342 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e22b4f23df643679e7e4343b773b342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=35656 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 35656 /var/tmp/host.sock 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 35656 ']' 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:26.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.148 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:26.148 [2024-12-10 22:44:33.876787] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:13:26.148 [2024-12-10 22:44:33.876893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid35656 ] 00:13:26.408 [2024-12-10 22:44:33.943782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.408 [2024-12-10 22:44:34.000993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.667 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.667 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:26.667 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.927 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:27.185 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f43ac9d9-1b37-4776-90ca-d9c19be66ec6 00:13:27.185 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:27.185 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F43AC9D91B37477690CAD9C19BE66EC6 -i 00:13:27.442 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3199bb95-96d4-43f9-a5b3-466cf436b878 00:13:27.442 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:27.442 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3199BB9596D443F9A5B3466CF436B878 -i 00:13:27.700 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:27.958 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:28.216 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:28.216 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:28.784 nvme0n1 00:13:28.784 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:28.784 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:29.042 nvme1n2 00:13:29.042 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:29.042 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:29.042 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:29.042 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:29.042 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:29.300 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:29.300 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:29.301 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:29.301 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:29.558 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f43ac9d9-1b37-4776-90ca-d9c19be66ec6 == \f\4\3\a\c\9\d\9\-\1\b\3\7\-\4\7\7\6\-\9\0\c\a\-\d\9\c\1\9\b\e\6\6\e\c\6 ]] 00:13:29.558 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:29.558 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:29.558 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:29.816 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 3199bb95-96d4-43f9-a5b3-466cf436b878 == \3\1\9\9\b\b\9\5\-\9\6\d\4\-\4\3\f\9\-\a\5\b\3\-\4\6\6\c\f\4\3\6\b\8\7\8 ]] 00:13:29.816 22:44:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.074 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid f43ac9d9-1b37-4776-90ca-d9c19be66ec6 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F43AC9D91B37477690CAD9C19BE66EC6 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F43AC9D91B37477690CAD9C19BE66EC6 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:30.332 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F43AC9D91B37477690CAD9C19BE66EC6 00:13:30.590 [2024-12-10 22:44:38.245588] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:30.590 [2024-12-10 22:44:38.245627] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:30.590 [2024-12-10 22:44:38.245657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.590 request: 00:13:30.590 { 00:13:30.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.590 "namespace": { 00:13:30.590 "bdev_name": "invalid", 00:13:30.590 "nsid": 1, 00:13:30.590 "nguid": "F43AC9D91B37477690CAD9C19BE66EC6", 00:13:30.590 "no_auto_visible": false, 00:13:30.590 "hide_metadata": false 00:13:30.590 }, 00:13:30.590 "method": "nvmf_subsystem_add_ns", 00:13:30.590 "req_id": 1 00:13:30.590 } 00:13:30.590 Got JSON-RPC error response 00:13:30.590 response: 00:13:30.590 { 00:13:30.590 "code": -32602, 00:13:30.590 "message": "Invalid parameters" 00:13:30.590 } 00:13:30.590 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:30.590 22:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:30.590 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.590 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.590 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid f43ac9d9-1b37-4776-90ca-d9c19be66ec6 00:13:30.590 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:30.590 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F43AC9D91B37477690CAD9C19BE66EC6 -i 00:13:30.849 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 35656 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 35656 ']' 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 35656 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:33.389 22:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 35656 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 35656' 00:13:33.389 killing process with pid 35656 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 35656 00:13:33.389 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 35656 00:13:33.649 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.909 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:33.909 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:33.909 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:33.909 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:33.909 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.909 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:33.909 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.909 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:13:33.909 rmmod nvme_tcp 00:13:33.909 rmmod nvme_fabrics 00:13:34.168 rmmod nvme_keyring 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 34144 ']' 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 34144 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 34144 ']' 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 34144 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 34144 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 34144' 00:13:34.168 killing process with pid 34144 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 34144 00:13:34.168 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 34144 00:13:34.426 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:13:34.426 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:34.426 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:34.427 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:34.427 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:34.427 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:34.427 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:34.427 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:34.427 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:34.427 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.427 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.427 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.336 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:36.336 00:13:36.336 real 0m24.938s 00:13:36.336 user 0m36.045s 00:13:36.336 sys 0m4.683s 00:13:36.336 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.336 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:36.336 ************************************ 00:13:36.336 END TEST nvmf_ns_masking 00:13:36.336 ************************************ 00:13:36.336 22:44:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:36.336 22:44:44 
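The `uuid2nguid` helper invoked above (from `nvmf/common.sh`) converts a bdev UUID into the NGUID string passed to `nvmf_subsystem_add_ns -g`. The trace only shows the `tr -d -` step; a minimal sketch of the full conversion, assuming the uppercasing seen in the resulting `-g` argument is also part of the helper:

```shell
#!/bin/sh
# Sketch of uuid2nguid: strip the dashes from a UUID and uppercase the hex
# digits, yielding the 32-character NGUID used with nvmf_subsystem_add_ns -g.
# The uppercasing step is an assumption inferred from the NGUID that appears
# in the logged RPC call; the log itself only shows `tr -d -`.
uuid2nguid() {
  echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid f43ac9d9-1b37-4776-90ca-d9c19be66ec6
```

This matches the pairing seen in the log, where UUID `f43ac9d9-1b37-4776-90ca-d9c19be66ec6` is added as namespace 1 with `-g F43AC9D91B37477690CAD9C19BE66EC6`.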
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:36.336 22:44:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:36.336 22:44:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.336 22:44:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:36.336 ************************************ 00:13:36.336 START TEST nvmf_nvme_cli 00:13:36.336 ************************************ 00:13:36.336 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:36.595 * Looking for test storage... 00:13:36.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.595 22:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:36.595 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:36.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.596 --rc genhtml_branch_coverage=1 00:13:36.596 --rc genhtml_function_coverage=1 00:13:36.596 --rc genhtml_legend=1 00:13:36.596 --rc geninfo_all_blocks=1 00:13:36.596 --rc geninfo_unexecuted_blocks=1 00:13:36.596 
00:13:36.596 ' 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:36.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.596 --rc genhtml_branch_coverage=1 00:13:36.596 --rc genhtml_function_coverage=1 00:13:36.596 --rc genhtml_legend=1 00:13:36.596 --rc geninfo_all_blocks=1 00:13:36.596 --rc geninfo_unexecuted_blocks=1 00:13:36.596 00:13:36.596 ' 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:36.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.596 --rc genhtml_branch_coverage=1 00:13:36.596 --rc genhtml_function_coverage=1 00:13:36.596 --rc genhtml_legend=1 00:13:36.596 --rc geninfo_all_blocks=1 00:13:36.596 --rc geninfo_unexecuted_blocks=1 00:13:36.596 00:13:36.596 ' 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:36.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.596 --rc genhtml_branch_coverage=1 00:13:36.596 --rc genhtml_function_coverage=1 00:13:36.596 --rc genhtml_legend=1 00:13:36.596 --rc geninfo_all_blocks=1 00:13:36.596 --rc geninfo_unexecuted_blocks=1 00:13:36.596 00:13:36.596 ' 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
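The trace above steps through the `cmp_versions`/`lt` helpers from `scripts/common.sh`, splitting `lcov --version` output (`1.15`) and the threshold (`2`) on `.-:` and comparing field by field to decide which `LCOV_OPTS` to export. A minimal standalone sketch of that dotted-version "less than" check (not the actual SPDK helper; field names and structure are simplified):

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version "less than" comparison in the spirit of the
# cmp_versions/lt helpers traced above. Missing fields compare as 0,
# so "1.15" vs "2" behaves like "1.15" vs "2.0".
version_lt() {
    local -a v1 v2
    local i len a b
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    ((${#v1[@]} > ${#v2[@]})) && len=${#v1[@]} || len=${#v2[@]}
    for ((i = 0; i < len; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1  # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.1 2.0 || echo "2.1 >= 2.0"
```

In the run above the check succeeds (`lcov` 1.15 is older than 2), so the legacy branch-coverage flags are exported.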
00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.596 22:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:36.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:36.596 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:39.131 22:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:39.131 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.131 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:39.132 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.132 22:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:39.132 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:39.132 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.132 22:44:46 
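The `gather_supported_nvmf_pci_devs` trace above buckets PCI NICs into the `e810`, `x722`, and `mlx` arrays by vendor:device ID before walking `/sys/bus/pci/devices/*/net/` for their interface names (here two E810 ports, `cvl_0_0` and `cvl_0_1`). The ID-to-family classification can be sketched with a bash associative array; the IDs below are the ones visible in the trace, and `classify_nic` is a hypothetical helper name:

```shell
#!/usr/bin/env bash
# Vendor:device -> NIC-family lookup, mirroring the bucketing done by
# gather_supported_nvmf_pci_devs above. IDs taken from the trace.
declare -A nic_family=(
    [0x8086:0x1592]=e810 [0x8086:0x159b]=e810   # Intel E810 parts
    [0x8086:0x37d2]=x722                        # Intel X722
    [0x15b3:0x1017]=mlx  [0x15b3:0x1019]=mlx    # Mellanox parts
    [0x15b3:0x101b]=mlx  [0x15b3:0x101d]=mlx
)

classify_nic() {  # classify_nic <vendor_id> <device_id>
    echo "${nic_family[$1:$2]:-unknown}"
}

classify_nic 0x8086 0x159b   # the device found twice in this run -> e810
classify_nic 0x15b3 0x9999   # unlisted device -> unknown
```

Devices that classify as `unknown` are skipped by the real script; here both found ports are E810, which is why `pci_devs` is reset to `"${e810[@]}"` in the trace.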
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:39.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:13:39.132 00:13:39.132 --- 10.0.0.2 ping statistics --- 00:13:39.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.132 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:39.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:13:39.132 00:13:39.132 --- 10.0.0.1 ping statistics --- 00:13:39.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.132 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:39.132 22:44:46 
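Note how `ipts` above appends `-m comment --comment 'SPDK_NVMF:...'` to every rule it inserts; the matching `iptr` teardown (seen at the end of the previous test) then pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, removing only the test's own rules. The filter step can be demonstrated on a simulated `iptables-save` dump (the actual save/restore needs root, so it is omitted here):

```shell
#!/usr/bin/env bash
# Tag-and-filter firewall cleanup pattern: rules carry an 'SPDK_NVMF:'
# comment so teardown can strip them wholesale. Simulated ruleset below.
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 ..."
-A INPUT -p tcp --dport 22 -j ACCEPT'

# The grep stage of iptr: everything except the tagged test rules survives.
grep -v SPDK_NVMF <<< "$ruleset"
```

Tagging at insert time makes the teardown idempotent: it never has to remember which rules it added, and pre-existing rules (loopback, ssh) pass through untouched.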
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=38687 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 38687 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 38687 ']' 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.132 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.132 [2024-12-10 22:44:46.717727] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:13:39.132 [2024-12-10 22:44:46.717828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.132 [2024-12-10 22:44:46.792206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.132 [2024-12-10 22:44:46.853800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.132 [2024-12-10 22:44:46.853873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.132 [2024-12-10 22:44:46.853901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.132 [2024-12-10 22:44:46.853912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.132 [2024-12-10 22:44:46.853926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:39.132 [2024-12-10 22:44:46.855768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.132 [2024-12-10 22:44:46.855832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.132 [2024-12-10 22:44:46.855888] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.132 [2024-12-10 22:44:46.855892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.391 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.391 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:39.391 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:39.391 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:39.391 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.391 [2024-12-10 22:44:47.011556] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.391 Malloc0 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.391 Malloc1 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.391 [2024-12-10 22:44:47.112572] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.391 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.651 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.651 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:13:39.651 00:13:39.651 Discovery Log Number of Records 2, Generation counter 2 00:13:39.651 =====Discovery Log Entry 0====== 00:13:39.651 trtype: tcp 00:13:39.651 adrfam: ipv4 00:13:39.651 subtype: current discovery subsystem 00:13:39.651 treq: not required 00:13:39.651 portid: 0 00:13:39.651 trsvcid: 4420 
00:13:39.651 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:39.651 traddr: 10.0.0.2 00:13:39.651 eflags: explicit discovery connections, duplicate discovery information 00:13:39.651 sectype: none 00:13:39.651 =====Discovery Log Entry 1====== 00:13:39.651 trtype: tcp 00:13:39.651 adrfam: ipv4 00:13:39.651 subtype: nvme subsystem 00:13:39.651 treq: not required 00:13:39.651 portid: 0 00:13:39.651 trsvcid: 4420 00:13:39.651 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:39.651 traddr: 10.0.0.2 00:13:39.651 eflags: none 00:13:39.651 sectype: none 00:13:39.651 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:39.651 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:39.651 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:39.651 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.652 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:39.652 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:39.652 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.652 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:39.652 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.652 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:39.652 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:40.220 22:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:40.220 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:40.220 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.220 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:40.220 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:40.220 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:42.757 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:42.757 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:42.757 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:42.758 
22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:42.758 /dev/nvme0n2 ]] 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:42.758 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:42.758 rmmod nvme_tcp 00:13:42.758 rmmod nvme_fabrics 00:13:42.758 rmmod nvme_keyring 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 38687 ']' 
00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 38687 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 38687 ']' 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 38687 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 38687 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 38687' 00:13:42.758 killing process with pid 38687 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 38687 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 38687 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.758 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:45.299 00:13:45.299 real 0m8.367s 00:13:45.299 user 0m14.755s 00:13:45.299 sys 0m2.401s 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:45.299 ************************************ 00:13:45.299 END TEST nvmf_nvme_cli 00:13:45.299 ************************************ 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.299 ************************************ 00:13:45.299 START TEST 
nvmf_vfio_user 00:13:45.299 ************************************ 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:45.299 * Looking for test storage... 00:13:45.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.299 22:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.299 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:45.300 22:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:45.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.300 --rc genhtml_branch_coverage=1 00:13:45.300 --rc genhtml_function_coverage=1 00:13:45.300 --rc genhtml_legend=1 00:13:45.300 --rc geninfo_all_blocks=1 00:13:45.300 --rc geninfo_unexecuted_blocks=1 00:13:45.300 00:13:45.300 ' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:45.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.300 --rc genhtml_branch_coverage=1 00:13:45.300 --rc genhtml_function_coverage=1 00:13:45.300 --rc genhtml_legend=1 00:13:45.300 --rc geninfo_all_blocks=1 00:13:45.300 --rc geninfo_unexecuted_blocks=1 00:13:45.300 00:13:45.300 ' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:45.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.300 --rc genhtml_branch_coverage=1 00:13:45.300 --rc genhtml_function_coverage=1 00:13:45.300 --rc genhtml_legend=1 00:13:45.300 --rc geninfo_all_blocks=1 00:13:45.300 --rc geninfo_unexecuted_blocks=1 00:13:45.300 00:13:45.300 ' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:45.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.300 --rc genhtml_branch_coverage=1 00:13:45.300 --rc genhtml_function_coverage=1 00:13:45.300 --rc genhtml_legend=1 00:13:45.300 --rc geninfo_all_blocks=1 00:13:45.300 --rc geninfo_unexecuted_blocks=1 00:13:45.300 00:13:45.300 ' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.300 
22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:45.300 22:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=39499 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 39499' 00:13:45.300 Process pid: 39499 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 39499 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 39499 
']' 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.300 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:45.300 [2024-12-10 22:44:52.730483] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:13:45.300 [2024-12-10 22:44:52.730633] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.300 [2024-12-10 22:44:52.796388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.300 [2024-12-10 22:44:52.852058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.300 [2024-12-10 22:44:52.852113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.300 [2024-12-10 22:44:52.852141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.300 [2024-12-10 22:44:52.852158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.300 [2024-12-10 22:44:52.852168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:45.300 [2024-12-10 22:44:52.853580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.301 [2024-12-10 22:44:52.853644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.301 [2024-12-10 22:44:52.853714] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.301 [2024-12-10 22:44:52.853709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.301 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.301 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:45.301 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:46.679 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:46.679 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:46.679 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:46.679 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:46.679 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:46.679 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:46.937 Malloc1 00:13:46.937 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:47.195 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:47.452 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:47.708 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:47.708 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:47.708 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:47.967 Malloc2 00:13:48.226 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:48.484 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:48.742 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:49.002 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:49.002 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:49.003 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
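
The per-device setup the log has just executed (register the VFIOUSER transport, make a socket directory per device, create a Malloc bdev, create a subsystem, attach the namespace, add a vfio-user listener) can be condensed into the following standalone sketch. All commands and paths mirror the log verbatim; the assumption that an `nvmf_tgt` is already running on `/var/tmp/spdk.sock`, and the workspace-specific `rpc.py` location, come from this particular CI environment and are not portable.

```shell
#!/usr/bin/env bash
# Sketch of the vfio-user setup driven by nvmf_vfio_user.sh in the log above.
# Assumes an nvmf_tgt process is already up and serving RPCs on the default
# /var/tmp/spdk.sock; the RPC path below is this CI workspace's and is an
# assumption, not a general default.
set -euo pipefail

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
VFIO_ROOT=/var/run/vfio-user
NUM_DEVICES=2

# One-time: register the VFIOUSER transport with the running target.
"$RPC" nvmf_create_transport -t VFIOUSER
mkdir -p "$VFIO_ROOT"

for i in $(seq 1 "$NUM_DEVICES"); do
    dir="$VFIO_ROOT/domain/vfio-user$i/$i"
    mkdir -p "$dir"
    # 64 MiB backing bdev, 512-byte blocks (MALLOC_BLOCK_SIZE=512 in the log).
    "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"
    "$RPC" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    "$RPC" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    # For VFIOUSER the listener address is the socket directory, not an IP.
    "$RPC" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "$dir" -s 0
done
```

A client then connects by pointing its transport ID at the socket directory, as the `spdk_nvme_identify -r 'trtype:VFIOUSER traddr:... subnqn:...'` invocation later in the log does. This is an environment-specific setup fragment and is not runnable outside a host with SPDK built and a target process started.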
$(seq 1 $NUM_DEVICES) 00:13:49.003 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:49.003 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:49.003 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:49.003 [2024-12-10 22:44:56.541187] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:13:49.003 [2024-12-10 22:44:56.541230] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39931 ] 00:13:49.003 [2024-12-10 22:44:56.592748] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:49.003 [2024-12-10 22:44:56.597919] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:49.003 [2024-12-10 22:44:56.597950] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd96d38b000 00:13:49.003 [2024-12-10 22:44:56.598899] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.003 [2024-12-10 22:44:56.599897] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.003 [2024-12-10 22:44:56.600903] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.003 [2024-12-10 22:44:56.601904] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.003 [2024-12-10 22:44:56.602928] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.003 [2024-12-10 22:44:56.603917] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.003 [2024-12-10 22:44:56.604918] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.003 [2024-12-10 22:44:56.605922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.003 [2024-12-10 22:44:56.606926] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:49.003 [2024-12-10 22:44:56.606945] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd96d380000 00:13:49.003 [2024-12-10 22:44:56.608098] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:49.003 [2024-12-10 22:44:56.623272] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:49.003 [2024-12-10 22:44:56.623319] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:49.003 [2024-12-10 22:44:56.628033] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:49.003 [2024-12-10 22:44:56.628093] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:49.003 [2024-12-10 22:44:56.628195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:49.003 [2024-12-10 22:44:56.628227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:49.003 [2024-12-10 22:44:56.628239] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:49.003 [2024-12-10 22:44:56.629030] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:49.003 [2024-12-10 22:44:56.629054] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:49.003 [2024-12-10 22:44:56.629067] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:49.003 [2024-12-10 22:44:56.630029] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:49.003 [2024-12-10 22:44:56.630049] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:49.003 [2024-12-10 22:44:56.630063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:49.003 [2024-12-10 22:44:56.631036] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:49.003 [2024-12-10 22:44:56.631056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:49.003 [2024-12-10 22:44:56.632039] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:49.003 [2024-12-10 22:44:56.632058] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:49.003 [2024-12-10 22:44:56.632068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:49.003 [2024-12-10 22:44:56.632080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:49.003 [2024-12-10 22:44:56.632189] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:49.003 [2024-12-10 22:44:56.632198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:49.003 [2024-12-10 22:44:56.632207] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:49.003 [2024-12-10 22:44:56.633556] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:49.003 [2024-12-10 22:44:56.634049] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:49.003 [2024-12-10 22:44:56.635057] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:49.003 [2024-12-10 22:44:56.636054] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:49.003 [2024-12-10 22:44:56.636205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:49.003 [2024-12-10 22:44:56.637085] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:49.003 [2024-12-10 22:44:56.637104] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:49.003 [2024-12-10 22:44:56.637114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:49.003 [2024-12-10 22:44:56.637137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:49.003 [2024-12-10 22:44:56.637151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:49.003 [2024-12-10 22:44:56.637184] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.003 [2024-12-10 22:44:56.637195] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.003 [2024-12-10 22:44:56.637201] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.003 [2024-12-10 22:44:56.637222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.003 [2024-12-10 22:44:56.637292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:13:49.003 [2024-12-10 22:44:56.637311] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:49.003 [2024-12-10 22:44:56.637319] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:49.003 [2024-12-10 22:44:56.637326] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:49.003 [2024-12-10 22:44:56.637335] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:49.003 [2024-12-10 22:44:56.637342] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:49.003 [2024-12-10 22:44:56.637349] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:49.003 [2024-12-10 22:44:56.637357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:49.003 [2024-12-10 22:44:56.637373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.637408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 22:44:56.637424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.004 [2024-12-10 22:44:56.637436] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.004 [2024-12-10 22:44:56.637447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.004 [2024-12-10 22:44:56.637459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.004 [2024-12-10 22:44:56.637467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.637508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 22:44:56.637519] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:49.004 [2024-12-10 22:44:56.637543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.637613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 22:44:56.637680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637716] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:49.004 [2024-12-10 22:44:56.637725] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:49.004 [2024-12-10 22:44:56.637731] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.004 [2024-12-10 22:44:56.637741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.637756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 22:44:56.637781] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:49.004 [2024-12-10 22:44:56.637799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637829] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.004 [2024-12-10 22:44:56.637837] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.004 [2024-12-10 22:44:56.637858] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.004 [2024-12-10 22:44:56.637868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.637920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 22:44:56.637944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.637973] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.004 [2024-12-10 22:44:56.637981] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.004 [2024-12-10 22:44:56.637987] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.004 [2024-12-10 22:44:56.637996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.638010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:13:49.004 [2024-12-10 22:44:56.638028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.638040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.638054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.638065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.638074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.638082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.638091] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:49.004 [2024-12-10 22:44:56.638098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:49.004 [2024-12-10 22:44:56.638107] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:49.004 [2024-12-10 22:44:56.638133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.638150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 22:44:56.638169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.638181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 22:44:56.638197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.638208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 22:44:56.638224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.638236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 22:44:56.638257] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:49.004 [2024-12-10 22:44:56.638267] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:49.004 [2024-12-10 22:44:56.638274] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:49.004 [2024-12-10 22:44:56.638280] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:49.004 [2024-12-10 22:44:56.638287] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:49.004 [2024-12-10 22:44:56.638296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:49.004 [2024-12-10 22:44:56.638310] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:49.004 [2024-12-10 22:44:56.638320] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:49.004 [2024-12-10 22:44:56.638327] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.004 [2024-12-10 22:44:56.638336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.638351] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:49.004 [2024-12-10 22:44:56.638360] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.004 [2024-12-10 22:44:56.638365] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.004 [2024-12-10 22:44:56.638374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.638386] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:49.004 [2024-12-10 22:44:56.638394] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:49.004 [2024-12-10 22:44:56.638399] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.004 [2024-12-10 22:44:56.638408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:49.004 [2024-12-10 22:44:56.638419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 
22:44:56.638440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 22:44:56.638457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:49.004 [2024-12-10 22:44:56.638469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:49.004 ===================================================== 00:13:49.004 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:49.004 ===================================================== 00:13:49.004 Controller Capabilities/Features 00:13:49.004 ================================ 00:13:49.004 Vendor ID: 4e58 00:13:49.004 Subsystem Vendor ID: 4e58 00:13:49.004 Serial Number: SPDK1 00:13:49.004 Model Number: SPDK bdev Controller 00:13:49.004 Firmware Version: 25.01 00:13:49.004 Recommended Arb Burst: 6 00:13:49.004 IEEE OUI Identifier: 8d 6b 50 00:13:49.004 Multi-path I/O 00:13:49.005 May have multiple subsystem ports: Yes 00:13:49.005 May have multiple controllers: Yes 00:13:49.005 Associated with SR-IOV VF: No 00:13:49.005 Max Data Transfer Size: 131072 00:13:49.005 Max Number of Namespaces: 32 00:13:49.005 Max Number of I/O Queues: 127 00:13:49.005 NVMe Specification Version (VS): 1.3 00:13:49.005 NVMe Specification Version (Identify): 1.3 00:13:49.005 Maximum Queue Entries: 256 00:13:49.005 Contiguous Queues Required: Yes 00:13:49.005 Arbitration Mechanisms Supported 00:13:49.005 Weighted Round Robin: Not Supported 00:13:49.005 Vendor Specific: Not Supported 00:13:49.005 Reset Timeout: 15000 ms 00:13:49.005 Doorbell Stride: 4 bytes 00:13:49.005 NVM Subsystem Reset: Not Supported 00:13:49.005 Command Sets Supported 00:13:49.005 NVM Command Set: Supported 00:13:49.005 Boot Partition: Not Supported 00:13:49.005 Memory Page Size Minimum: 4096 bytes 00:13:49.005 
Memory Page Size Maximum: 4096 bytes 00:13:49.005 Persistent Memory Region: Not Supported 00:13:49.005 Optional Asynchronous Events Supported 00:13:49.005 Namespace Attribute Notices: Supported 00:13:49.005 Firmware Activation Notices: Not Supported 00:13:49.005 ANA Change Notices: Not Supported 00:13:49.005 PLE Aggregate Log Change Notices: Not Supported 00:13:49.005 LBA Status Info Alert Notices: Not Supported 00:13:49.005 EGE Aggregate Log Change Notices: Not Supported 00:13:49.005 Normal NVM Subsystem Shutdown event: Not Supported 00:13:49.005 Zone Descriptor Change Notices: Not Supported 00:13:49.005 Discovery Log Change Notices: Not Supported 00:13:49.005 Controller Attributes 00:13:49.005 128-bit Host Identifier: Supported 00:13:49.005 Non-Operational Permissive Mode: Not Supported 00:13:49.005 NVM Sets: Not Supported 00:13:49.005 Read Recovery Levels: Not Supported 00:13:49.005 Endurance Groups: Not Supported 00:13:49.005 Predictable Latency Mode: Not Supported 00:13:49.005 Traffic Based Keep ALive: Not Supported 00:13:49.005 Namespace Granularity: Not Supported 00:13:49.005 SQ Associations: Not Supported 00:13:49.005 UUID List: Not Supported 00:13:49.005 Multi-Domain Subsystem: Not Supported 00:13:49.005 Fixed Capacity Management: Not Supported 00:13:49.005 Variable Capacity Management: Not Supported 00:13:49.005 Delete Endurance Group: Not Supported 00:13:49.005 Delete NVM Set: Not Supported 00:13:49.005 Extended LBA Formats Supported: Not Supported 00:13:49.005 Flexible Data Placement Supported: Not Supported 00:13:49.005 00:13:49.005 Controller Memory Buffer Support 00:13:49.005 ================================ 00:13:49.005 Supported: No 00:13:49.005 00:13:49.005 Persistent Memory Region Support 00:13:49.005 ================================ 00:13:49.005 Supported: No 00:13:49.005 00:13:49.005 Admin Command Set Attributes 00:13:49.005 ============================ 00:13:49.005 Security Send/Receive: Not Supported 00:13:49.005 Format NVM: Not Supported 
00:13:49.005 Firmware Activate/Download: Not Supported 00:13:49.005 Namespace Management: Not Supported 00:13:49.005 Device Self-Test: Not Supported 00:13:49.005 Directives: Not Supported 00:13:49.005 NVMe-MI: Not Supported 00:13:49.005 Virtualization Management: Not Supported 00:13:49.005 Doorbell Buffer Config: Not Supported 00:13:49.005 Get LBA Status Capability: Not Supported 00:13:49.005 Command & Feature Lockdown Capability: Not Supported 00:13:49.005 Abort Command Limit: 4 00:13:49.005 Async Event Request Limit: 4 00:13:49.005 Number of Firmware Slots: N/A 00:13:49.005 Firmware Slot 1 Read-Only: N/A 00:13:49.005 Firmware Activation Without Reset: N/A 00:13:49.005 Multiple Update Detection Support: N/A 00:13:49.005 Firmware Update Granularity: No Information Provided 00:13:49.005 Per-Namespace SMART Log: No 00:13:49.005 Asymmetric Namespace Access Log Page: Not Supported 00:13:49.005 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:49.005 Command Effects Log Page: Supported 00:13:49.005 Get Log Page Extended Data: Supported 00:13:49.005 Telemetry Log Pages: Not Supported 00:13:49.005 Persistent Event Log Pages: Not Supported 00:13:49.005 Supported Log Pages Log Page: May Support 00:13:49.005 Commands Supported & Effects Log Page: Not Supported 00:13:49.005 Feature Identifiers & Effects Log Page:May Support 00:13:49.005 NVMe-MI Commands & Effects Log Page: May Support 00:13:49.005 Data Area 4 for Telemetry Log: Not Supported 00:13:49.005 Error Log Page Entries Supported: 128 00:13:49.005 Keep Alive: Supported 00:13:49.005 Keep Alive Granularity: 10000 ms 00:13:49.005 00:13:49.005 NVM Command Set Attributes 00:13:49.005 ========================== 00:13:49.005 Submission Queue Entry Size 00:13:49.005 Max: 64 00:13:49.005 Min: 64 00:13:49.005 Completion Queue Entry Size 00:13:49.005 Max: 16 00:13:49.005 Min: 16 00:13:49.005 Number of Namespaces: 32 00:13:49.005 Compare Command: Supported 00:13:49.005 Write Uncorrectable Command: Not Supported 00:13:49.005 Dataset 
Management Command: Supported 00:13:49.005 Write Zeroes Command: Supported 00:13:49.005 Set Features Save Field: Not Supported 00:13:49.005 Reservations: Not Supported 00:13:49.005 Timestamp: Not Supported 00:13:49.005 Copy: Supported 00:13:49.005 Volatile Write Cache: Present 00:13:49.005 Atomic Write Unit (Normal): 1 00:13:49.005 Atomic Write Unit (PFail): 1 00:13:49.005 Atomic Compare & Write Unit: 1 00:13:49.005 Fused Compare & Write: Supported 00:13:49.005 Scatter-Gather List 00:13:49.005 SGL Command Set: Supported (Dword aligned) 00:13:49.005 SGL Keyed: Not Supported 00:13:49.006 SGL Bit Bucket Descriptor: Not Supported 00:13:49.006 SGL Metadata Pointer: Not Supported 00:13:49.006 Oversized SGL: Not Supported 00:13:49.006 SGL Metadata Address: Not Supported 00:13:49.006 SGL Offset: Not Supported 00:13:49.006 Transport SGL Data Block: Not Supported 00:13:49.006 Replay Protected Memory Block: Not Supported 00:13:49.006 00:13:49.006 Firmware Slot Information 00:13:49.006 ========================= 00:13:49.006 Active slot: 1 00:13:49.006 Slot 1 Firmware Revision: 25.01 00:13:49.006 00:13:49.006 00:13:49.006 Commands Supported and Effects 00:13:49.006 ============================== 00:13:49.006 Admin Commands 00:13:49.006 -------------- 00:13:49.006 Get Log Page (02h): Supported 00:13:49.006 Identify (06h): Supported 00:13:49.006 Abort (08h): Supported 00:13:49.006 Set Features (09h): Supported 00:13:49.006 Get Features (0Ah): Supported 00:13:49.006 Asynchronous Event Request (0Ch): Supported 00:13:49.006 Keep Alive (18h): Supported 00:13:49.006 I/O Commands 00:13:49.006 ------------ 00:13:49.006 Flush (00h): Supported LBA-Change 00:13:49.006 Write (01h): Supported LBA-Change 00:13:49.006 Read (02h): Supported 00:13:49.006 Compare (05h): Supported 00:13:49.006 Write Zeroes (08h): Supported LBA-Change 00:13:49.006 Dataset Management (09h): Supported LBA-Change 00:13:49.006 Copy (19h): Supported LBA-Change 00:13:49.006 00:13:49.006 Error Log 00:13:49.006 ========= 
00:13:49.006 00:13:49.006 Arbitration 00:13:49.006 =========== 00:13:49.006 Arbitration Burst: 1 00:13:49.006 00:13:49.006 Power Management 00:13:49.006 ================ 00:13:49.006 Number of Power States: 1 00:13:49.006 Current Power State: Power State #0 00:13:49.006 Power State #0: 00:13:49.006 Max Power: 0.00 W 00:13:49.006 Non-Operational State: Operational 00:13:49.006 Entry Latency: Not Reported 00:13:49.006 Exit Latency: Not Reported 00:13:49.006 Relative Read Throughput: 0 00:13:49.006 Relative Read Latency: 0 00:13:49.006 Relative Write Throughput: 0 00:13:49.006 Relative Write Latency: 0 00:13:49.006 Idle Power: Not Reported 00:13:49.006 Active Power: Not Reported 00:13:49.006 Non-Operational Permissive Mode: Not Supported 00:13:49.006 00:13:49.006 Health Information 00:13:49.006 ================== 00:13:49.006 Critical Warnings: 00:13:49.006 Available Spare Space: OK 00:13:49.006 Temperature: OK 00:13:49.006 Device Reliability: OK 00:13:49.006 Read Only: No 00:13:49.006 Volatile Memory Backup: OK 00:13:49.006 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:49.006 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:49.006 Available Spare: 0% 00:13:49.006 Available Spare Threshold: 0% 00:13:49.006 [2024-12-10 22:44:56.638612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:49.006 [2024-12-10 22:44:56.638630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:49.006 [2024-12-10 22:44:56.638675] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:49.006 [2024-12-10 22:44:56.638696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.006 [2024-12-10 22:44:56.638708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.006 [2024-12-10 22:44:56.638718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.006 [2024-12-10 22:44:56.638728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.006 [2024-12-10 22:44:56.642560] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:49.006 [2024-12-10 22:44:56.642585] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:49.006 [2024-12-10 22:44:56.643104] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:49.006 [2024-12-10 22:44:56.643190] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:49.006 [2024-12-10 22:44:56.643204] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:49.006 [2024-12-10 22:44:56.644108] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:49.006 [2024-12-10 22:44:56.644132] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:49.006 [2024-12-10 22:44:56.644192] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:49.006 [2024-12-10 22:44:56.646154] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:49.006 Life Percentage Used: 0% 00:13:49.006 Data Units Read: 0 00:13:49.006 Data
Units Written: 0 00:13:49.006 Host Read Commands: 0 00:13:49.006 Host Write Commands: 0 00:13:49.006 Controller Busy Time: 0 minutes 00:13:49.006 Power Cycles: 0 00:13:49.006 Power On Hours: 0 hours 00:13:49.006 Unsafe Shutdowns: 0 00:13:49.006 Unrecoverable Media Errors: 0 00:13:49.006 Lifetime Error Log Entries: 0 00:13:49.006 Warning Temperature Time: 0 minutes 00:13:49.006 Critical Temperature Time: 0 minutes 00:13:49.006 00:13:49.006 Number of Queues 00:13:49.006 ================ 00:13:49.006 Number of I/O Submission Queues: 127 00:13:49.006 Number of I/O Completion Queues: 127 00:13:49.006 00:13:49.006 Active Namespaces 00:13:49.006 ================= 00:13:49.006 Namespace ID:1 00:13:49.006 Error Recovery Timeout: Unlimited 00:13:49.006 Command Set Identifier: NVM (00h) 00:13:49.006 Deallocate: Supported 00:13:49.006 Deallocated/Unwritten Error: Not Supported 00:13:49.006 Deallocated Read Value: Unknown 00:13:49.006 Deallocate in Write Zeroes: Not Supported 00:13:49.006 Deallocated Guard Field: 0xFFFF 00:13:49.006 Flush: Supported 00:13:49.006 Reservation: Supported 00:13:49.006 Namespace Sharing Capabilities: Multiple Controllers 00:13:49.006 Size (in LBAs): 131072 (0GiB) 00:13:49.006 Capacity (in LBAs): 131072 (0GiB) 00:13:49.006 Utilization (in LBAs): 131072 (0GiB) 00:13:49.006 NGUID: 58187BAD49984DB8ADF2527C733D4387 00:13:49.006 UUID: 58187bad-4998-4db8-adf2-527c733d4387 00:13:49.006 Thin Provisioning: Not Supported 00:13:49.006 Per-NS Atomic Units: Yes 00:13:49.006 Atomic Boundary Size (Normal): 0 00:13:49.006 Atomic Boundary Size (PFail): 0 00:13:49.006 Atomic Boundary Offset: 0 00:13:49.006 Maximum Single Source Range Length: 65535 00:13:49.006 Maximum Copy Length: 65535 00:13:49.006 Maximum Source Range Count: 1 00:13:49.006 NGUID/EUI64 Never Reused: No 00:13:49.006 Namespace Write Protected: No 00:13:49.006 Number of LBA Formats: 1 00:13:49.006 Current LBA Format: LBA Format #00 00:13:49.006 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:13:49.006 00:13:49.006 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:49.266 [2024-12-10 22:44:56.896422] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:54.543 Initializing NVMe Controllers 00:13:54.543 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:54.543 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:54.543 Initialization complete. Launching workers. 00:13:54.543 ======================================================== 00:13:54.543 Latency(us) 00:13:54.543 Device Information : IOPS MiB/s Average min max 00:13:54.543 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 30767.89 120.19 4159.07 1208.92 10360.94 00:13:54.543 ======================================================== 00:13:54.543 Total : 30767.89 120.19 4159.07 1208.92 10360.94 00:13:54.543 00:13:54.543 [2024-12-10 22:45:01.914399] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:54.543 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:54.543 [2024-12-10 22:45:02.166565] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:59.814 Initializing NVMe Controllers 00:13:59.814 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:13:59.814 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:59.814 Initialization complete. Launching workers. 00:13:59.814 ======================================================== 00:13:59.814 Latency(us) 00:13:59.814 Device Information : IOPS MiB/s Average min max 00:13:59.814 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15947.40 62.29 8031.59 6013.38 15971.41 00:13:59.814 ======================================================== 00:13:59.814 Total : 15947.40 62.29 8031.59 6013.38 15971.41 00:13:59.814 00:13:59.814 [2024-12-10 22:45:07.208154] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:59.814 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:59.814 [2024-12-10 22:45:07.431225] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.086 [2024-12-10 22:45:12.515921] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:05.086 Initializing NVMe Controllers 00:14:05.086 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:05.086 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:05.086 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:05.086 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:05.086 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:05.086 Initialization complete. Launching workers. 
00:14:05.086 Starting thread on core 2 00:14:05.086 Starting thread on core 3 00:14:05.086 Starting thread on core 1 00:14:05.087 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:05.346 [2024-12-10 22:45:12.831263] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:08.634 [2024-12-10 22:45:15.900828] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:08.634 Initializing NVMe Controllers 00:14:08.634 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:08.634 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:08.634 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:08.634 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:08.634 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:08.634 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:08.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:08.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:08.634 Initialization complete. Launching workers. 
00:14:08.634 Starting thread on core 1 with urgent priority queue 00:14:08.634 Starting thread on core 2 with urgent priority queue 00:14:08.634 Starting thread on core 3 with urgent priority queue 00:14:08.634 Starting thread on core 0 with urgent priority queue 00:14:08.634 SPDK bdev Controller (SPDK1 ) core 0: 4877.67 IO/s 20.50 secs/100000 ios 00:14:08.634 SPDK bdev Controller (SPDK1 ) core 1: 4993.67 IO/s 20.03 secs/100000 ios 00:14:08.634 SPDK bdev Controller (SPDK1 ) core 2: 5382.00 IO/s 18.58 secs/100000 ios 00:14:08.634 SPDK bdev Controller (SPDK1 ) core 3: 5422.00 IO/s 18.44 secs/100000 ios 00:14:08.634 ======================================================== 00:14:08.634 00:14:08.634 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:08.634 [2024-12-10 22:45:16.222591] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:08.634 Initializing NVMe Controllers 00:14:08.634 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:08.634 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:08.634 Namespace ID: 1 size: 0GB 00:14:08.634 Initialization complete. 00:14:08.634 INFO: using host memory buffer for IO 00:14:08.634 Hello world! 
00:14:08.634 [2024-12-10 22:45:16.259266] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:08.634 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:08.913 [2024-12-10 22:45:16.573987] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:09.909 Initializing NVMe Controllers 00:14:09.909 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:09.909 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:09.909 Initialization complete. Launching workers. 00:14:09.909 submit (in ns) avg, min, max = 7303.8, 3487.8, 6994205.6 00:14:09.909 complete (in ns) avg, min, max = 26172.1, 2064.4, 4024577.8 00:14:09.909 00:14:09.909 Submit histogram 00:14:09.909 ================ 00:14:09.909 Range in us Cumulative Count 00:14:09.909 3.484 - 3.508: 0.1488% ( 19) 00:14:09.909 3.508 - 3.532: 0.7595% ( 78) 00:14:09.909 3.532 - 3.556: 2.5213% ( 225) 00:14:09.909 3.556 - 3.579: 6.2016% ( 470) 00:14:09.909 3.579 - 3.603: 13.0060% ( 869) 00:14:09.909 3.603 - 3.627: 21.6819% ( 1108) 00:14:09.909 3.627 - 3.650: 31.5637% ( 1262) 00:14:09.909 3.650 - 3.674: 40.1613% ( 1098) 00:14:09.909 3.674 - 3.698: 47.4513% ( 931) 00:14:09.909 3.698 - 3.721: 54.0130% ( 838) 00:14:09.909 3.721 - 3.745: 59.2514% ( 669) 00:14:09.909 3.745 - 3.769: 63.5502% ( 549) 00:14:09.909 3.769 - 3.793: 67.1600% ( 461) 00:14:09.909 3.793 - 3.816: 70.3156% ( 403) 00:14:09.909 3.816 - 3.840: 73.3145% ( 383) 00:14:09.909 3.840 - 3.864: 76.7912% ( 444) 00:14:09.909 3.864 - 3.887: 80.1503% ( 429) 00:14:09.909 3.887 - 3.911: 83.0162% ( 366) 00:14:09.909 3.911 - 3.935: 85.2713% ( 288) 00:14:09.909 3.935 - 3.959: 86.8530% ( 202) 00:14:09.909 3.959 - 3.982: 88.5600% ( 
218) 00:14:09.909 3.982 - 4.006: 90.2279% ( 213) 00:14:09.909 4.006 - 4.030: 91.3867% ( 148) 00:14:09.909 4.030 - 4.053: 92.3733% ( 126) 00:14:09.909 4.053 - 4.077: 93.2034% ( 106) 00:14:09.909 4.077 - 4.101: 93.8454% ( 82) 00:14:09.909 4.101 - 4.124: 94.2135% ( 47) 00:14:09.909 4.124 - 4.148: 94.5815% ( 47) 00:14:09.909 4.148 - 4.172: 94.8634% ( 36) 00:14:09.909 4.172 - 4.196: 95.0748% ( 27) 00:14:09.909 4.196 - 4.219: 95.3645% ( 37) 00:14:09.910 4.219 - 4.243: 95.5916% ( 29) 00:14:09.910 4.243 - 4.267: 95.7169% ( 16) 00:14:09.910 4.267 - 4.290: 95.8265% ( 14) 00:14:09.910 4.290 - 4.314: 95.9518% ( 16) 00:14:09.910 4.314 - 4.338: 96.1005% ( 19) 00:14:09.910 4.338 - 4.361: 96.2258% ( 16) 00:14:09.910 4.361 - 4.385: 96.3120% ( 11) 00:14:09.910 4.385 - 4.409: 96.3824% ( 9) 00:14:09.910 4.409 - 4.433: 96.4137% ( 4) 00:14:09.910 4.433 - 4.456: 96.4686% ( 7) 00:14:09.910 4.456 - 4.480: 96.5155% ( 6) 00:14:09.910 4.480 - 4.504: 96.5625% ( 6) 00:14:09.910 4.504 - 4.527: 96.5938% ( 4) 00:14:09.910 4.527 - 4.551: 96.6408% ( 6) 00:14:09.910 4.551 - 4.575: 96.6800% ( 5) 00:14:09.910 4.575 - 4.599: 96.7348% ( 7) 00:14:09.910 4.599 - 4.622: 96.7661% ( 4) 00:14:09.910 4.622 - 4.646: 96.8679% ( 13) 00:14:09.910 4.646 - 4.670: 96.9227% ( 7) 00:14:09.910 4.670 - 4.693: 96.9462% ( 3) 00:14:09.910 4.693 - 4.717: 97.0480% ( 13) 00:14:09.910 4.717 - 4.741: 97.1185% ( 9) 00:14:09.910 4.741 - 4.764: 97.1889% ( 9) 00:14:09.910 4.764 - 4.788: 97.2672% ( 10) 00:14:09.910 4.788 - 4.812: 97.3299% ( 8) 00:14:09.910 4.812 - 4.836: 97.4082% ( 10) 00:14:09.910 4.836 - 4.859: 97.4865% ( 10) 00:14:09.910 4.859 - 4.883: 97.5648% ( 10) 00:14:09.910 4.883 - 4.907: 97.6666% ( 13) 00:14:09.910 4.907 - 4.930: 97.7371% ( 9) 00:14:09.910 4.930 - 4.954: 97.7762% ( 5) 00:14:09.910 4.954 - 4.978: 97.8545% ( 10) 00:14:09.910 4.978 - 5.001: 97.8858% ( 4) 00:14:09.910 5.001 - 5.025: 97.9641% ( 10) 00:14:09.910 5.025 - 5.049: 98.0268% ( 8) 00:14:09.910 5.049 - 5.073: 98.0503% ( 3) 00:14:09.910 5.073 - 5.096: 
98.0659% ( 2) 00:14:09.910 5.096 - 5.120: 98.1051% ( 5) 00:14:09.910 5.120 - 5.144: 98.1599% ( 7) 00:14:09.910 5.144 - 5.167: 98.1990% ( 5) 00:14:09.910 5.167 - 5.191: 98.2147% ( 2) 00:14:09.910 5.191 - 5.215: 98.2304% ( 2) 00:14:09.910 5.215 - 5.239: 98.2460% ( 2) 00:14:09.910 5.239 - 5.262: 98.2695% ( 3) 00:14:09.910 5.262 - 5.286: 98.3008% ( 4) 00:14:09.910 5.286 - 5.310: 98.3635% ( 8) 00:14:09.910 5.310 - 5.333: 98.3870% ( 3) 00:14:09.910 5.333 - 5.357: 98.4026% ( 2) 00:14:09.910 5.357 - 5.381: 98.4183% ( 2) 00:14:09.910 5.404 - 5.428: 98.4340% ( 2) 00:14:09.910 5.428 - 5.452: 98.4418% ( 1) 00:14:09.910 5.476 - 5.499: 98.4496% ( 1) 00:14:09.910 5.523 - 5.547: 98.4653% ( 2) 00:14:09.910 5.547 - 5.570: 98.4731% ( 1) 00:14:09.910 5.618 - 5.641: 98.4809% ( 1) 00:14:09.910 5.665 - 5.689: 98.4888% ( 1) 00:14:09.910 5.736 - 5.760: 98.4966% ( 1) 00:14:09.910 5.902 - 5.926: 98.5044% ( 1) 00:14:09.910 5.973 - 5.997: 98.5123% ( 1) 00:14:09.910 6.068 - 6.116: 98.5201% ( 1) 00:14:09.910 6.116 - 6.163: 98.5279% ( 1) 00:14:09.910 6.210 - 6.258: 98.5357% ( 1) 00:14:09.910 6.305 - 6.353: 98.5436% ( 1) 00:14:09.910 6.447 - 6.495: 98.5592% ( 2) 00:14:09.910 6.732 - 6.779: 98.5671% ( 1) 00:14:09.910 6.921 - 6.969: 98.5749% ( 1) 00:14:09.910 7.253 - 7.301: 98.5827% ( 1) 00:14:09.910 7.396 - 7.443: 98.5906% ( 1) 00:14:09.910 7.443 - 7.490: 98.5984% ( 1) 00:14:09.910 7.585 - 7.633: 98.6140% ( 2) 00:14:09.910 7.680 - 7.727: 98.6219% ( 1) 00:14:09.910 7.727 - 7.775: 98.6297% ( 1) 00:14:09.910 7.775 - 7.822: 98.6375% ( 1) 00:14:09.910 7.822 - 7.870: 98.6532% ( 2) 00:14:09.910 7.964 - 8.012: 98.6689% ( 2) 00:14:09.910 8.012 - 8.059: 98.6845% ( 2) 00:14:09.910 8.059 - 8.107: 98.6923% ( 1) 00:14:09.910 8.154 - 8.201: 98.7080% ( 2) 00:14:09.910 8.201 - 8.249: 98.7158% ( 1) 00:14:09.910 8.439 - 8.486: 98.7237% ( 1) 00:14:09.910 8.581 - 8.628: 98.7315% ( 1) 00:14:09.910 8.723 - 8.770: 98.7393% ( 1) 00:14:09.910 8.770 - 8.818: 98.7550% ( 2) 00:14:09.910 8.818 - 8.865: 98.7707% ( 2) 
00:14:09.910 9.102 - 9.150: 98.7863% ( 2) 00:14:09.910 9.244 - 9.292: 98.7941% ( 1) 00:14:09.910 9.387 - 9.434: 98.8020% ( 1) 00:14:09.910 9.481 - 9.529: 98.8098% ( 1) 00:14:09.910 10.098 - 10.145: 98.8176% ( 1) 00:14:09.910 10.145 - 10.193: 98.8255% ( 1) 00:14:09.910 10.193 - 10.240: 98.8333% ( 1) 00:14:09.910 10.287 - 10.335: 98.8411% ( 1) 00:14:09.910 10.430 - 10.477: 98.8490% ( 1) 00:14:09.910 10.524 - 10.572: 98.8568% ( 1) 00:14:09.910 10.572 - 10.619: 98.8646% ( 1) 00:14:09.910 10.619 - 10.667: 98.8724% ( 1) 00:14:09.910 10.856 - 10.904: 98.8881% ( 2) 00:14:09.910 11.141 - 11.188: 98.8959% ( 1) 00:14:09.910 11.283 - 11.330: 98.9038% ( 1) 00:14:09.910 12.089 - 12.136: 98.9194% ( 2) 00:14:09.910 12.705 - 12.800: 98.9273% ( 1) 00:14:09.910 12.990 - 13.084: 98.9351% ( 1) 00:14:09.910 13.179 - 13.274: 98.9429% ( 1) 00:14:09.910 13.274 - 13.369: 98.9507% ( 1) 00:14:09.910 13.653 - 13.748: 98.9586% ( 1) 00:14:09.910 13.938 - 14.033: 98.9664% ( 1) 00:14:09.910 15.739 - 15.834: 98.9742% ( 1) 00:14:09.910 16.687 - 16.782: 98.9821% ( 1) 00:14:09.910 17.067 - 17.161: 98.9977% ( 2) 00:14:09.910 17.161 - 17.256: 99.0056% ( 1) 00:14:09.910 17.351 - 17.446: 99.0212% ( 2) 00:14:09.910 17.446 - 17.541: 99.0525% ( 4) 00:14:09.910 17.541 - 17.636: 99.1152% ( 8) 00:14:09.910 17.636 - 17.730: 99.1622% ( 6) 00:14:09.910 17.730 - 17.825: 99.2405% ( 10) 00:14:09.910 17.825 - 17.920: 99.3109% ( 9) 00:14:09.910 17.920 - 18.015: 99.3736% ( 8) 00:14:09.910 18.015 - 18.110: 99.4049% ( 4) 00:14:09.910 18.110 - 18.204: 99.4675% ( 8) 00:14:09.910 18.204 - 18.299: 99.4910% ( 3) 00:14:09.910 18.299 - 18.394: 99.5224% ( 4) 00:14:09.910 18.394 - 18.489: 99.5615% ( 5) 00:14:09.910 18.489 - 18.584: 99.5928% ( 4) 00:14:09.910 18.584 - 18.679: 99.6320% ( 5) 00:14:09.910 18.679 - 18.773: 99.6711% ( 5) 00:14:09.910 18.773 - 18.868: 99.6868% ( 2) 00:14:09.910 18.868 - 18.963: 99.7025% ( 2) 00:14:09.910 18.963 - 19.058: 99.7181% ( 2) 00:14:09.910 19.058 - 19.153: 99.7494% ( 4) 00:14:09.910 19.153 - 
19.247: 99.7651% ( 2) 00:14:09.910 19.247 - 19.342: 99.7729% ( 1) 00:14:09.910 19.342 - 19.437: 99.7808% ( 1) 00:14:09.910 19.437 - 19.532: 99.7964% ( 2) 00:14:09.910 19.532 - 19.627: 99.8199% ( 3) 00:14:09.910 19.816 - 19.911: 99.8356% ( 2) 00:14:09.910 20.101 - 20.196: 99.8512% ( 2) 00:14:09.910 22.756 - 22.850: 99.8591% ( 1) 00:14:09.910 23.230 - 23.324: 99.8669% ( 1) 00:14:09.910 23.514 - 23.609: 99.8747% ( 1) 00:14:09.910 24.841 - 25.031: 99.8825% ( 1) 00:14:09.910 26.738 - 26.927: 99.8904% ( 1) 00:14:09.910 26.927 - 27.117: 99.8982% ( 1) 00:14:09.910 27.686 - 27.876: 99.9060% ( 1) 00:14:09.910 30.530 - 30.720: 99.9139% ( 1) 00:14:09.910 33.754 - 33.944: 99.9217% ( 1) 00:14:09.910 3980.705 - 4004.978: 99.9687% ( 6) 00:14:09.910 4004.978 - 4029.250: 99.9922% ( 3) 00:14:09.910 6990.507 - 7039.052: 100.0000% ( 1) 00:14:09.910 00:14:09.910 Complete histogram 00:14:09.910 ================== 00:14:09.910 Range in us Cumulative Count 00:14:09.910 2.062 - 2.074: 0.3837% ( 49) 00:14:09.910 2.074 - 2.086: 19.1371% ( 2395) 00:14:09.910 2.086 - 2.098: 35.5180% ( 2092) 00:14:09.910 2.098 - 2.110: 38.7910% ( 418) 00:14:09.910 2.110 - 2.121: 53.6763% ( 1901) 00:14:09.910 2.121 - 2.133: 58.2257% ( 581) 00:14:09.910 2.133 - 2.145: 61.2090% ( 381) 00:14:09.910 2.145 - 2.157: 71.9991% ( 1378) 00:14:09.910 2.157 - 2.169: 75.5148% ( 449) 00:14:09.910 2.169 - 2.181: 77.9500% ( 311) 00:14:09.910 2.181 - 2.193: 83.5095% ( 710) 00:14:09.910 2.193 - 2.204: 84.9816% ( 188) 00:14:09.910 2.204 - 2.216: 85.9525% ( 124) 00:14:09.910 2.216 - 2.228: 87.5969% ( 210) 00:14:09.910 2.228 - 2.240: 89.3979% ( 230) 00:14:09.910 2.240 - 2.252: 90.4001% ( 128) 00:14:09.910 2.252 - 2.264: 90.9561% ( 71) 00:14:09.910 2.264 - 2.276: 91.1988% ( 31) 00:14:09.910 2.276 - 2.287: 91.5198% ( 41) 00:14:09.910 2.287 - 2.299: 91.9035% ( 49) 00:14:09.910 2.299 - 2.311: 92.2794% ( 48) 00:14:09.910 2.311 - 2.323: 92.5221% ( 31) 00:14:09.910 2.323 - 2.335: 92.6787% ( 20) 00:14:09.910 2.335 - 2.347: 92.7962% ( 15) 
00:14:09.910 2.347 - 2.359: 92.9293% ( 17) 00:14:09.910 2.359 - 2.370: 93.1094% ( 23) 00:14:09.910 2.370 - 2.382: 93.2973% ( 24) 00:14:09.910 2.382 - 2.394: 93.5166% ( 28) 00:14:09.910 2.394 - 2.406: 93.7280% ( 27) 00:14:09.910 2.406 - 2.418: 93.9785% ( 32) 00:14:09.910 2.418 - 2.430: 94.2291% ( 32) 00:14:09.910 2.430 - 2.441: 94.4484% ( 28) 00:14:09.910 2.441 - 2.453: 94.7772% ( 42) 00:14:09.910 2.453 - 2.465: 95.1061% ( 42) 00:14:09.910 2.465 - 2.477: 95.5133% ( 52) 00:14:09.910 2.477 - 2.489: 95.7638% ( 32) 00:14:09.910 2.489 - 2.501: 96.0536% ( 37) 00:14:09.910 2.501 - 2.513: 96.2337% ( 23) 00:14:09.910 2.513 - 2.524: 96.3589% ( 16) 00:14:09.910 2.524 - 2.536: 96.4529% ( 12) 00:14:09.911 2.536 - 2.548: 96.5860% ( 17) 00:14:09.911 2.548 - 2.560: 96.6878% ( 13) 00:14:09.911 2.560 - 2.572: 96.7896% ( 13) 00:14:09.911 2.572 - 2.584: 96.8366% ( 6) 00:14:09.911 2.584 - 2.596: 96.8992% ( 8) 00:14:09.911 2.596 - 2.607: 97.0167% ( 15) 00:14:09.911 2.607 - 2.619: 97.0480% ( 4) 00:14:09.911 2.619 - 2.631: 97.1185% ( 9) 00:14:09.911 2.631 - 2.643: 97.1733% ( 7) 00:14:09.911 2.643 - 2.655: 97.1968% ( 3) 00:14:09.911 2.655 - 2.667: 97.2203% ( 3) 00:14:09.911 2.667 - 2.679: 97.2359% ( 2) 00:14:09.911 2.679 - 2.690: 97.2829% ( 6) 00:14:09.911 2.690 - 2.702: 97.3064% ( 3) 00:14:09.911 2.702 - 2.714: 97.3690% ( 8) 00:14:09.911 2.714 - 2.726: 97.4082% ( 5) 00:14:09.911 2.726 - 2.738: 97.4473% ( 5) 00:14:09.911 2.738 - 2.750: 97.4708% ( 3) 00:14:09.911 2.750 - 2.761: 97.5100% ( 5) 00:14:09.911 2.761 - 2.773: 97.5335% ( 3) 00:14:09.911 2.773 - 2.785: 97.5570% ( 3) 00:14:09.911 2.785 - 2.797: 97.5961% ( 5) 00:14:09.911 2.797 - 2.809: 97.6118% ( 2) 00:14:09.911 2.809 - 2.821: 97.6588% ( 6) 00:14:09.911 2.821 - 2.833: 97.7214% ( 8) 00:14:09.911 2.833 - 2.844: 97.7371% ( 2) 00:14:09.911 2.844 - 2.856: 97.7527% ( 2) 00:14:09.911 2.856 - 2.868: 97.7762% ( 3) 00:14:09.911 2.868 - 2.880: 97.7840% ( 1) 00:14:09.911 2.880 - 2.892: 97.8154% ( 4) 00:14:09.911 2.892 - 2.904: 97.8623% ( 6) 
00:14:09.911 2.916 - 2.927: 97.9172% ( 7) 00:14:09.911 2.927 - 2.939: 97.9328% ( 2) 00:14:09.911 2.939 - 2.951: 97.9406% ( 1) 00:14:09.911 2.951 - 2.963: 97.9641% ( 3) 00:14:09.911 2.963 - 2.975: 97.9955% ( 4) 00:14:09.911 2.975 - 2.987: 98.0033% ( 1) 00:14:09.911 2.987 - 2.999: 98.0189% ( 2) 00:14:09.911 3.022 - 3.034: 98.0424% ( 3) 00:14:09.911 3.034 - 3.058: 98.1051% ( 8) 00:14:09.911 3.058 - 3.081: 98.1521% ( 6) 00:14:09.911 3.081 - 3.105: 98.1756% ( 3) 00:14:09.911 3.105 - 3.129: 98.1990% ( 3) 00:14:09.911 3.129 - 3.153: 98.2304% ( 4) 00:14:09.911 3.153 - 3.176: 98.2382% ( 1) 00:14:09.911 3.176 - 3.200: 98.2460% ( 1) 00:14:09.911 3.200 - 3.224: 98.2617% ( 2) 00:14:09.911 3.224 - 3.247: 98.2695% ( 1) 00:14:09.911 3.247 - 3.271: 98.2773% ( 1) 00:14:09.911 3.271 - 3.295: 98.2852% ( 1) 00:14:09.911 3.319 - 3.342: 98.2930% ( 1) 00:14:09.911 3.342 - 3.366: 98.3165% ( 3) 00:14:09.911 3.366 - 3.390: 98.3400% ( 3) 00:14:09.911 3.413 - 3.437: 98.3556% ( 2) 00:14:09.911 3.437 - 3.461: 98.3791% ( 3) 00:14:09.911 3.461 - 3.484: 98.3948% ( 2) 00:14:09.911 3.484 - 3.508: 98.4183% ( 3) 00:14:09.911 3.508 - 3.532: 98.4261% ( 1) 00:14:09.911 3.532 - 3.556: 98.4418% ( 2) 00:14:09.911 3.556 - 3.579: 98.4496% ( 1) 00:14:09.911 3.579 - 3.603: 98.4574% ( 1) 00:14:09.911 3.603 - 3.627: 98.4731% ( 2) 00:14:09.911 3.627 - 3.650: 98.4888% ( 2) 00:14:09.911 3.650 - 3.674: 98.4966% ( 1) 00:14:09.911 3.674 - 3.698: 98.5044% ( 1) 00:14:09.911 3.698 - 3.721: 98.5123% ( 1) 00:14:09.911 3.721 - 3.745: 98.5357% ( 3) 00:14:09.911 3.745 - 3.769: 98.5436% ( 1) 00:14:09.911 3.769 - 3.793: 98.5514% ( 1) 00:14:09.911 3.793 - 3.816: 98.5671% ( 2) 00:14:09.911 3.816 - 3.840: 98.5749% ( 1) 00:14:09.911 3.959 - 3.982: 98.5827% ( 1) 00:14:09.911 3.982 - 4.006: 98.5906% ( 1) 00:14:09.911 4.101 - 4.124: 98.5984% ( 1) 00:14:09.911 4.148 - 4.172: 98.6062% ( 1) 00:14:09.911 5.001 - 5.025: 98.6140% ( 1) 00:14:09.911 5.665 - 5.689: 98.6219% ( 1) 00:14:09.911 6.068 - 6.116: 98.6297% ( 1) 00:14:09.911 6.353 - 
6.400: 98.6454% ( 2) 00:14:09.911 6.400 - 6.447: 98.6532% ( 1) 00:14:09.911 6.542 - 6.590: 98.6610% ( 1) 00:14:09.911 6.590 - 6.637: 98.6689% ( 1) 00:14:09.911 6.684 - 6.732: 98.6767% ( 1) 00:14:09.911 6.874 - 6.921: 98.6923% ( 2) 00:14:09.911 6.921 - 6.969: 98.7002% ( 1) 00:14:09.911 7.064 - 7.111: 98.7080% ( 1) 00:14:09.911 7.206 - 7.253: 98.7158% ( 1) 00:14:09.911 7.396 - 7.443: 98.7237% ( 1) 00:14:09.911 7.443 - 7.490: 98.7315% ( 1) 00:14:09.911 7.822 - 7.870: 98.7393% ( 1) 00:14:09.911 8.059 - 8.107: 98.7472% ( 1) 00:14:09.911 8.107 - 8.154: 98.7550% ( 1) 00:14:09.911 8.581 - 8.628: 98.7628% ( 1) 00:14:09.911 9.055 - 9.102: 98.7707% ( 1) 00:14:09.911 9.292 - 9.339: 98.7785% ( 1) 00:14:09.911 10.856 - 10.904: 98.7863% ( 1) 00:14:09.911 11.046 - 11.093: 98.7941% ( 1) 00:14:09.911 14.033 - 14.127: 98.8020% ( 1) 00:14:09.911 14.601 - 14.696: 98.8098% ( 1) 00:14:09.911 15.265 - 15.360: 98.8176% ( 1) 00:14:09.911 15.455 - 15.550: 98.8255% ( 1) 00:14:09.911 15.550 - 15.644: 98.8333% ( 1) 00:14:09.911 15.644 - 15.739: 98.8411% ( 1) 00:14:09.911 15.739 - 15.834: 98.8568% ( 2) 00:14:09.911 15.834 - 15.929: 98.8803% ( 3) 00:14:09.911 15.929 - 16.024: 98.9194% ( 5) 00:14:09.911 16.024 - 16.119: 98.9507% ( 4) 00:14:09.911 16.119 - 16.213: 98.9586% ( 1) 00:14:09.911 16.213 - 16.308: 98.9664% ( 1) 00:14:09.911 16.308 - 16.403: 98.9977% ( 4) 00:14:09.911 16.403 - 16.498: 99.0056% ( 1) 00:14:09.911 16.498 - 16.593: 99.0604% ( 7) 00:14:09.911 16.593 - 16.687: 99.1152% ( 7) 00:14:09.911 16.687 - 16.782: 99.1230% ( 1) 00:14:09.911 16.782 - 16.877: 99.1543% ( 4) 00:14:09.911 16.877 - 16.972: 99.1700% ( 2) 00:14:09.911 16.972 - 17.067: 99.2013% ( 4) 00:14:09.911 17.067 - 17.161: 99.2091% ( 1) 00:14:09.911 17.161 - 17.256: 99.2170% ( 1) 00:14:09.911 17.256 - 17.351: 99.2405% ( 3) 00:14:10.169 [2024-12-10 22:45:17.607069] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:10.169 17.351 - 17.446: 99.2561% ( 2) 00:14:10.169 
17.446 - 17.541: 99.2640% ( 1) 00:14:10.169 17.541 - 17.636: 99.2718% ( 1) 00:14:10.169 17.730 - 17.825: 99.2874% ( 2) 00:14:10.169 17.825 - 17.920: 99.3109% ( 3) 00:14:10.169 18.015 - 18.110: 99.3266% ( 2) 00:14:10.169 18.204 - 18.299: 99.3344% ( 1) 00:14:10.169 18.299 - 18.394: 99.3501% ( 2) 00:14:10.169 18.489 - 18.584: 99.3579% ( 1) 00:14:10.170 18.584 - 18.679: 99.3658% ( 1) 00:14:10.170 20.575 - 20.670: 99.3736% ( 1) 00:14:10.170 21.807 - 21.902: 99.3814% ( 1) 00:14:10.170 23.324 - 23.419: 99.3971% ( 2) 00:14:10.170 1043.721 - 1049.790: 99.4049% ( 1) 00:14:10.170 3980.705 - 4004.978: 99.7181% ( 40) 00:14:10.170 4004.978 - 4029.250: 100.0000% ( 36) 00:14:10.170 00:14:10.170 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:10.170 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:10.170 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:10.170 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:10.170 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:10.428 [ 00:14:10.428 { 00:14:10.428 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:10.428 "subtype": "Discovery", 00:14:10.428 "listen_addresses": [], 00:14:10.428 "allow_any_host": true, 00:14:10.428 "hosts": [] 00:14:10.428 }, 00:14:10.428 { 00:14:10.428 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:10.428 "subtype": "NVMe", 00:14:10.428 "listen_addresses": [ 00:14:10.428 { 00:14:10.428 "trtype": "VFIOUSER", 00:14:10.428 "adrfam": "IPv4", 00:14:10.428 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:10.428 
"trsvcid": "0" 00:14:10.428 } 00:14:10.428 ], 00:14:10.428 "allow_any_host": true, 00:14:10.428 "hosts": [], 00:14:10.428 "serial_number": "SPDK1", 00:14:10.428 "model_number": "SPDK bdev Controller", 00:14:10.428 "max_namespaces": 32, 00:14:10.428 "min_cntlid": 1, 00:14:10.428 "max_cntlid": 65519, 00:14:10.428 "namespaces": [ 00:14:10.428 { 00:14:10.428 "nsid": 1, 00:14:10.428 "bdev_name": "Malloc1", 00:14:10.428 "name": "Malloc1", 00:14:10.428 "nguid": "58187BAD49984DB8ADF2527C733D4387", 00:14:10.428 "uuid": "58187bad-4998-4db8-adf2-527c733d4387" 00:14:10.428 } 00:14:10.428 ] 00:14:10.428 }, 00:14:10.428 { 00:14:10.428 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:10.428 "subtype": "NVMe", 00:14:10.428 "listen_addresses": [ 00:14:10.428 { 00:14:10.428 "trtype": "VFIOUSER", 00:14:10.428 "adrfam": "IPv4", 00:14:10.428 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:10.428 "trsvcid": "0" 00:14:10.428 } 00:14:10.428 ], 00:14:10.428 "allow_any_host": true, 00:14:10.428 "hosts": [], 00:14:10.428 "serial_number": "SPDK2", 00:14:10.428 "model_number": "SPDK bdev Controller", 00:14:10.428 "max_namespaces": 32, 00:14:10.428 "min_cntlid": 1, 00:14:10.428 "max_cntlid": 65519, 00:14:10.428 "namespaces": [ 00:14:10.428 { 00:14:10.428 "nsid": 1, 00:14:10.428 "bdev_name": "Malloc2", 00:14:10.428 "name": "Malloc2", 00:14:10.428 "nguid": "D2FF75A2BDAB408F99A6FF72FEFF6705", 00:14:10.428 "uuid": "d2ff75a2-bdab-408f-99a6-ff72feff6705" 00:14:10.428 } 00:14:10.428 ] 00:14:10.428 } 00:14:10.428 ] 00:14:10.428 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:10.428 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=43068 00:14:10.428 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 
subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:10.428 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:10.428 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:10.428 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:10.428 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:10.428 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:10.428 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:10.428 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:10.686 [2024-12-10 22:45:18.160057] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:10.686 Malloc3 00:14:10.687 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:10.944 [2024-12-10 22:45:18.562069] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:10.944 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:10.944 Asynchronous Event Request test 00:14:10.944 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:10.944 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:10.944 Registering asynchronous event callbacks... 
00:14:10.944 Starting namespace attribute notice tests for all controllers... 00:14:10.944 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:10.944 aer_cb - Changed Namespace 00:14:10.944 Cleaning up... 00:14:11.204 [ 00:14:11.204 { 00:14:11.204 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:11.204 "subtype": "Discovery", 00:14:11.204 "listen_addresses": [], 00:14:11.204 "allow_any_host": true, 00:14:11.204 "hosts": [] 00:14:11.204 }, 00:14:11.204 { 00:14:11.204 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:11.204 "subtype": "NVMe", 00:14:11.204 "listen_addresses": [ 00:14:11.204 { 00:14:11.204 "trtype": "VFIOUSER", 00:14:11.204 "adrfam": "IPv4", 00:14:11.204 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:11.204 "trsvcid": "0" 00:14:11.205 } 00:14:11.205 ], 00:14:11.205 "allow_any_host": true, 00:14:11.205 "hosts": [], 00:14:11.205 "serial_number": "SPDK1", 00:14:11.205 "model_number": "SPDK bdev Controller", 00:14:11.205 "max_namespaces": 32, 00:14:11.205 "min_cntlid": 1, 00:14:11.205 "max_cntlid": 65519, 00:14:11.205 "namespaces": [ 00:14:11.205 { 00:14:11.205 "nsid": 1, 00:14:11.205 "bdev_name": "Malloc1", 00:14:11.205 "name": "Malloc1", 00:14:11.205 "nguid": "58187BAD49984DB8ADF2527C733D4387", 00:14:11.205 "uuid": "58187bad-4998-4db8-adf2-527c733d4387" 00:14:11.205 }, 00:14:11.205 { 00:14:11.205 "nsid": 2, 00:14:11.205 "bdev_name": "Malloc3", 00:14:11.205 "name": "Malloc3", 00:14:11.205 "nguid": "7407FC18B62841B180AF199873156F2F", 00:14:11.205 "uuid": "7407fc18-b628-41b1-80af-199873156f2f" 00:14:11.205 } 00:14:11.205 ] 00:14:11.205 }, 00:14:11.205 { 00:14:11.205 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:11.205 "subtype": "NVMe", 00:14:11.205 "listen_addresses": [ 00:14:11.205 { 00:14:11.205 "trtype": "VFIOUSER", 00:14:11.205 "adrfam": "IPv4", 00:14:11.205 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:11.205 "trsvcid": "0" 00:14:11.205 } 00:14:11.205 ], 00:14:11.205 
"allow_any_host": true, 00:14:11.205 "hosts": [], 00:14:11.205 "serial_number": "SPDK2", 00:14:11.205 "model_number": "SPDK bdev Controller", 00:14:11.205 "max_namespaces": 32, 00:14:11.205 "min_cntlid": 1, 00:14:11.205 "max_cntlid": 65519, 00:14:11.205 "namespaces": [ 00:14:11.205 { 00:14:11.205 "nsid": 1, 00:14:11.205 "bdev_name": "Malloc2", 00:14:11.205 "name": "Malloc2", 00:14:11.205 "nguid": "D2FF75A2BDAB408F99A6FF72FEFF6705", 00:14:11.205 "uuid": "d2ff75a2-bdab-408f-99a6-ff72feff6705" 00:14:11.205 } 00:14:11.205 ] 00:14:11.205 } 00:14:11.205 ] 00:14:11.205 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 43068 00:14:11.205 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:11.205 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:11.205 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:11.205 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:11.205 [2024-12-10 22:45:18.862946] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:14:11.205 [2024-12-10 22:45:18.862986] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43205 ] 00:14:11.205 [2024-12-10 22:45:18.910634] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:11.205 [2024-12-10 22:45:18.919850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:11.205 [2024-12-10 22:45:18.919883] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdcc52d6000 00:14:11.205 [2024-12-10 22:45:18.920837] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:11.205 [2024-12-10 22:45:18.921861] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:11.205 [2024-12-10 22:45:18.922868] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:11.205 [2024-12-10 22:45:18.923877] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:11.205 [2024-12-10 22:45:18.924888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:11.205 [2024-12-10 22:45:18.925896] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:11.205 [2024-12-10 22:45:18.926920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:11.205 
[2024-12-10 22:45:18.927913] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:11.205 [2024-12-10 22:45:18.928929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:11.205 [2024-12-10 22:45:18.928951] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdcc52cb000 00:14:11.205 [2024-12-10 22:45:18.930095] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:11.465 [2024-12-10 22:45:18.948850] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:11.465 [2024-12-10 22:45:18.948891] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:11.465 [2024-12-10 22:45:18.950972] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:11.465 [2024-12-10 22:45:18.951029] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:11.465 [2024-12-10 22:45:18.951125] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:11.465 [2024-12-10 22:45:18.951152] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:11.465 [2024-12-10 22:45:18.951163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:11.465 [2024-12-10 22:45:18.951980] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:11.465 [2024-12-10 22:45:18.952002] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:11.465 [2024-12-10 22:45:18.952015] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:11.465 [2024-12-10 22:45:18.952984] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:11.465 [2024-12-10 22:45:18.953005] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:11.465 [2024-12-10 22:45:18.953019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:11.465 [2024-12-10 22:45:18.953995] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:11.465 [2024-12-10 22:45:18.954016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:11.465 [2024-12-10 22:45:18.954997] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:11.465 [2024-12-10 22:45:18.955018] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:11.465 [2024-12-10 22:45:18.955027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:11.465 [2024-12-10 22:45:18.955038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:11.465 [2024-12-10 22:45:18.955148] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:11.465 [2024-12-10 22:45:18.955156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:11.465 [2024-12-10 22:45:18.955164] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:11.466 [2024-12-10 22:45:18.956003] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:11.466 [2024-12-10 22:45:18.957006] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:11.466 [2024-12-10 22:45:18.958015] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:11.466 [2024-12-10 22:45:18.959012] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:11.466 [2024-12-10 22:45:18.959094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:11.466 [2024-12-10 22:45:18.960030] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:11.466 [2024-12-10 22:45:18.960051] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:11.466 [2024-12-10 22:45:18.960060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.960084] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:11.466 [2024-12-10 22:45:18.960101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.960124] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:11.466 [2024-12-10 22:45:18.960134] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:11.466 [2024-12-10 22:45:18.960141] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.466 [2024-12-10 22:45:18.960160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:11.466 [2024-12-10 22:45:18.966563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:11.466 [2024-12-10 22:45:18.966590] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:11.466 [2024-12-10 22:45:18.966600] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:11.466 [2024-12-10 22:45:18.966607] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:11.466 [2024-12-10 22:45:18.966616] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:11.466 [2024-12-10 22:45:18.966624] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:11.466 [2024-12-10 22:45:18.966632] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:11.466 [2024-12-10 22:45:18.966640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.966658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.966679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:11.466 [2024-12-10 22:45:18.974555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:11.466 [2024-12-10 22:45:18.974580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.466 [2024-12-10 22:45:18.974594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.466 [2024-12-10 22:45:18.974606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.466 [2024-12-10 22:45:18.974618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.466 [2024-12-10 22:45:18.974627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.974644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.974664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:11.466 [2024-12-10 22:45:18.982555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:11.466 [2024-12-10 22:45:18.982575] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:11.466 [2024-12-10 22:45:18.982584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.982597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.982607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.982620] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:11.466 [2024-12-10 22:45:18.990556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:11.466 [2024-12-10 22:45:18.990631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.990654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:11.466 
[2024-12-10 22:45:18.990670] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:11.466 [2024-12-10 22:45:18.990678] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:11.466 [2024-12-10 22:45:18.990684] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.466 [2024-12-10 22:45:18.990694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:11.466 [2024-12-10 22:45:18.998556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:11.466 [2024-12-10 22:45:18.998581] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:11.466 [2024-12-10 22:45:18.998598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.998613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:18.998626] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:11.466 [2024-12-10 22:45:18.998634] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:11.466 [2024-12-10 22:45:18.998640] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.466 [2024-12-10 22:45:18.998650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:11.466 [2024-12-10 22:45:19.006571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:11.466 [2024-12-10 22:45:19.006610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:19.006629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:19.006647] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:11.466 [2024-12-10 22:45:19.006656] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:11.466 [2024-12-10 22:45:19.006662] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.466 [2024-12-10 22:45:19.006672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:11.466 [2024-12-10 22:45:19.014567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:11.466 [2024-12-10 22:45:19.014591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:19.014605] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:19.014621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:19.014633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:19.014641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:19.014650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:19.014659] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:11.466 [2024-12-10 22:45:19.014667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:11.466 [2024-12-10 22:45:19.014676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:11.466 [2024-12-10 22:45:19.014700] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:11.466 [2024-12-10 22:45:19.022558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:11.466 [2024-12-10 22:45:19.022583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:11.466 [2024-12-10 22:45:19.030555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:11.466 [2024-12-10 22:45:19.030580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:11.466 [2024-12-10 22:45:19.038569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:11.466 [2024-12-10 
22:45:19.038595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:11.466 [2024-12-10 22:45:19.046568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:11.466 [2024-12-10 22:45:19.046601] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:11.466 [2024-12-10 22:45:19.046613] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:11.467 [2024-12-10 22:45:19.046619] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:11.467 [2024-12-10 22:45:19.046625] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:11.467 [2024-12-10 22:45:19.046634] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:11.467 [2024-12-10 22:45:19.046645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:11.467 [2024-12-10 22:45:19.046656] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:11.467 [2024-12-10 22:45:19.046665] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:11.467 [2024-12-10 22:45:19.046671] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.467 [2024-12-10 22:45:19.046679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:11.467 [2024-12-10 22:45:19.046690] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:11.467 [2024-12-10 22:45:19.046698] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:11.467 [2024-12-10 22:45:19.046704] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.467 [2024-12-10 22:45:19.046713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:11.467 [2024-12-10 22:45:19.046725] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:11.467 [2024-12-10 22:45:19.046733] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:11.467 [2024-12-10 22:45:19.046739] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.467 [2024-12-10 22:45:19.046748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:11.467 [2024-12-10 22:45:19.054573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:11.467 [2024-12-10 22:45:19.054601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:11.467 [2024-12-10 22:45:19.054619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:11.467 [2024-12-10 22:45:19.054632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:11.467 ===================================================== 00:14:11.467 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:11.467 ===================================================== 00:14:11.467 Controller Capabilities/Features 00:14:11.467 
================================ 00:14:11.467 Vendor ID: 4e58 00:14:11.467 Subsystem Vendor ID: 4e58 00:14:11.467 Serial Number: SPDK2 00:14:11.467 Model Number: SPDK bdev Controller 00:14:11.467 Firmware Version: 25.01 00:14:11.467 Recommended Arb Burst: 6 00:14:11.467 IEEE OUI Identifier: 8d 6b 50 00:14:11.467 Multi-path I/O 00:14:11.467 May have multiple subsystem ports: Yes 00:14:11.467 May have multiple controllers: Yes 00:14:11.467 Associated with SR-IOV VF: No 00:14:11.467 Max Data Transfer Size: 131072 00:14:11.467 Max Number of Namespaces: 32 00:14:11.467 Max Number of I/O Queues: 127 00:14:11.467 NVMe Specification Version (VS): 1.3 00:14:11.467 NVMe Specification Version (Identify): 1.3 00:14:11.467 Maximum Queue Entries: 256 00:14:11.467 Contiguous Queues Required: Yes 00:14:11.467 Arbitration Mechanisms Supported 00:14:11.467 Weighted Round Robin: Not Supported 00:14:11.467 Vendor Specific: Not Supported 00:14:11.467 Reset Timeout: 15000 ms 00:14:11.467 Doorbell Stride: 4 bytes 00:14:11.467 NVM Subsystem Reset: Not Supported 00:14:11.467 Command Sets Supported 00:14:11.467 NVM Command Set: Supported 00:14:11.467 Boot Partition: Not Supported 00:14:11.467 Memory Page Size Minimum: 4096 bytes 00:14:11.467 Memory Page Size Maximum: 4096 bytes 00:14:11.467 Persistent Memory Region: Not Supported 00:14:11.467 Optional Asynchronous Events Supported 00:14:11.467 Namespace Attribute Notices: Supported 00:14:11.467 Firmware Activation Notices: Not Supported 00:14:11.467 ANA Change Notices: Not Supported 00:14:11.467 PLE Aggregate Log Change Notices: Not Supported 00:14:11.467 LBA Status Info Alert Notices: Not Supported 00:14:11.467 EGE Aggregate Log Change Notices: Not Supported 00:14:11.467 Normal NVM Subsystem Shutdown event: Not Supported 00:14:11.467 Zone Descriptor Change Notices: Not Supported 00:14:11.467 Discovery Log Change Notices: Not Supported 00:14:11.467 Controller Attributes 00:14:11.467 128-bit Host Identifier: Supported 00:14:11.467 
Non-Operational Permissive Mode: Not Supported 00:14:11.467 NVM Sets: Not Supported 00:14:11.467 Read Recovery Levels: Not Supported 00:14:11.467 Endurance Groups: Not Supported 00:14:11.467 Predictable Latency Mode: Not Supported 00:14:11.467 Traffic Based Keep ALive: Not Supported 00:14:11.467 Namespace Granularity: Not Supported 00:14:11.467 SQ Associations: Not Supported 00:14:11.467 UUID List: Not Supported 00:14:11.467 Multi-Domain Subsystem: Not Supported 00:14:11.467 Fixed Capacity Management: Not Supported 00:14:11.467 Variable Capacity Management: Not Supported 00:14:11.467 Delete Endurance Group: Not Supported 00:14:11.467 Delete NVM Set: Not Supported 00:14:11.467 Extended LBA Formats Supported: Not Supported 00:14:11.467 Flexible Data Placement Supported: Not Supported 00:14:11.467 00:14:11.467 Controller Memory Buffer Support 00:14:11.467 ================================ 00:14:11.467 Supported: No 00:14:11.467 00:14:11.467 Persistent Memory Region Support 00:14:11.467 ================================ 00:14:11.467 Supported: No 00:14:11.467 00:14:11.467 Admin Command Set Attributes 00:14:11.467 ============================ 00:14:11.467 Security Send/Receive: Not Supported 00:14:11.467 Format NVM: Not Supported 00:14:11.467 Firmware Activate/Download: Not Supported 00:14:11.467 Namespace Management: Not Supported 00:14:11.467 Device Self-Test: Not Supported 00:14:11.467 Directives: Not Supported 00:14:11.467 NVMe-MI: Not Supported 00:14:11.467 Virtualization Management: Not Supported 00:14:11.467 Doorbell Buffer Config: Not Supported 00:14:11.467 Get LBA Status Capability: Not Supported 00:14:11.467 Command & Feature Lockdown Capability: Not Supported 00:14:11.467 Abort Command Limit: 4 00:14:11.467 Async Event Request Limit: 4 00:14:11.467 Number of Firmware Slots: N/A 00:14:11.467 Firmware Slot 1 Read-Only: N/A 00:14:11.467 Firmware Activation Without Reset: N/A 00:14:11.467 Multiple Update Detection Support: N/A 00:14:11.467 Firmware Update 
Granularity: No Information Provided 00:14:11.467 Per-Namespace SMART Log: No 00:14:11.467 Asymmetric Namespace Access Log Page: Not Supported 00:14:11.467 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:11.467 Command Effects Log Page: Supported 00:14:11.467 Get Log Page Extended Data: Supported 00:14:11.467 Telemetry Log Pages: Not Supported 00:14:11.467 Persistent Event Log Pages: Not Supported 00:14:11.467 Supported Log Pages Log Page: May Support 00:14:11.467 Commands Supported & Effects Log Page: Not Supported 00:14:11.467 Feature Identifiers & Effects Log Page:May Support 00:14:11.467 NVMe-MI Commands & Effects Log Page: May Support 00:14:11.467 Data Area 4 for Telemetry Log: Not Supported 00:14:11.467 Error Log Page Entries Supported: 128 00:14:11.467 Keep Alive: Supported 00:14:11.467 Keep Alive Granularity: 10000 ms 00:14:11.467 00:14:11.467 NVM Command Set Attributes 00:14:11.467 ========================== 00:14:11.467 Submission Queue Entry Size 00:14:11.467 Max: 64 00:14:11.467 Min: 64 00:14:11.467 Completion Queue Entry Size 00:14:11.467 Max: 16 00:14:11.467 Min: 16 00:14:11.467 Number of Namespaces: 32 00:14:11.467 Compare Command: Supported 00:14:11.467 Write Uncorrectable Command: Not Supported 00:14:11.467 Dataset Management Command: Supported 00:14:11.467 Write Zeroes Command: Supported 00:14:11.467 Set Features Save Field: Not Supported 00:14:11.467 Reservations: Not Supported 00:14:11.467 Timestamp: Not Supported 00:14:11.467 Copy: Supported 00:14:11.467 Volatile Write Cache: Present 00:14:11.467 Atomic Write Unit (Normal): 1 00:14:11.467 Atomic Write Unit (PFail): 1 00:14:11.467 Atomic Compare & Write Unit: 1 00:14:11.467 Fused Compare & Write: Supported 00:14:11.467 Scatter-Gather List 00:14:11.467 SGL Command Set: Supported (Dword aligned) 00:14:11.467 SGL Keyed: Not Supported 00:14:11.467 SGL Bit Bucket Descriptor: Not Supported 00:14:11.467 SGL Metadata Pointer: Not Supported 00:14:11.467 Oversized SGL: Not Supported 00:14:11.467 SGL 
Metadata Address: Not Supported 00:14:11.467 SGL Offset: Not Supported 00:14:11.467 Transport SGL Data Block: Not Supported 00:14:11.467 Replay Protected Memory Block: Not Supported 00:14:11.467 00:14:11.467 Firmware Slot Information 00:14:11.467 ========================= 00:14:11.467 Active slot: 1 00:14:11.467 Slot 1 Firmware Revision: 25.01 00:14:11.467 00:14:11.467 00:14:11.467 Commands Supported and Effects 00:14:11.467 ============================== 00:14:11.467 Admin Commands 00:14:11.467 -------------- 00:14:11.467 Get Log Page (02h): Supported 00:14:11.467 Identify (06h): Supported 00:14:11.467 Abort (08h): Supported 00:14:11.467 Set Features (09h): Supported 00:14:11.467 Get Features (0Ah): Supported 00:14:11.467 Asynchronous Event Request (0Ch): Supported 00:14:11.467 Keep Alive (18h): Supported 00:14:11.467 I/O Commands 00:14:11.467 ------------ 00:14:11.467 Flush (00h): Supported LBA-Change 00:14:11.467 Write (01h): Supported LBA-Change 00:14:11.468 Read (02h): Supported 00:14:11.468 Compare (05h): Supported 00:14:11.468 Write Zeroes (08h): Supported LBA-Change 00:14:11.468 Dataset Management (09h): Supported LBA-Change 00:14:11.468 Copy (19h): Supported LBA-Change 00:14:11.468 00:14:11.468 Error Log 00:14:11.468 ========= 00:14:11.468 00:14:11.468 Arbitration 00:14:11.468 =========== 00:14:11.468 Arbitration Burst: 1 00:14:11.468 00:14:11.468 Power Management 00:14:11.468 ================ 00:14:11.468 Number of Power States: 1 00:14:11.468 Current Power State: Power State #0 00:14:11.468 Power State #0: 00:14:11.468 Max Power: 0.00 W 00:14:11.468 Non-Operational State: Operational 00:14:11.468 Entry Latency: Not Reported 00:14:11.468 Exit Latency: Not Reported 00:14:11.468 Relative Read Throughput: 0 00:14:11.468 Relative Read Latency: 0 00:14:11.468 Relative Write Throughput: 0 00:14:11.468 Relative Write Latency: 0 00:14:11.468 Idle Power: Not Reported 00:14:11.468 Active Power: Not Reported 00:14:11.468 Non-Operational Permissive Mode: Not 
Supported 00:14:11.468 00:14:11.468 Health Information 00:14:11.468 ================== 00:14:11.468 Critical Warnings: 00:14:11.468 Available Spare Space: OK 00:14:11.468 Temperature: OK 00:14:11.468 Device Reliability: OK 00:14:11.468 Read Only: No 00:14:11.468 Volatile Memory Backup: OK 00:14:11.468 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:11.468 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:11.468 Available Spare: 0% 00:14:11.468 Available Sp[2024-12-10 22:45:19.054747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:11.468 [2024-12-10 22:45:19.062555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:11.468 [2024-12-10 22:45:19.062605] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:11.468 [2024-12-10 22:45:19.062624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.468 [2024-12-10 22:45:19.062636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.468 [2024-12-10 22:45:19.062646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.468 [2024-12-10 22:45:19.062656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.468 [2024-12-10 22:45:19.062724] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:11.468 [2024-12-10 22:45:19.062745] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:11.468 
[2024-12-10 22:45:19.063726] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:11.468 [2024-12-10 22:45:19.063805] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:11.468 [2024-12-10 22:45:19.063821] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:11.468 [2024-12-10 22:45:19.064729] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:11.468 [2024-12-10 22:45:19.064755] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:11.468 [2024-12-10 22:45:19.064813] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:11.468 [2024-12-10 22:45:19.065983] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:11.468 are Threshold: 0% 00:14:11.468 Life Percentage Used: 0% 00:14:11.468 Data Units Read: 0 00:14:11.468 Data Units Written: 0 00:14:11.468 Host Read Commands: 0 00:14:11.468 Host Write Commands: 0 00:14:11.468 Controller Busy Time: 0 minutes 00:14:11.468 Power Cycles: 0 00:14:11.468 Power On Hours: 0 hours 00:14:11.468 Unsafe Shutdowns: 0 00:14:11.468 Unrecoverable Media Errors: 0 00:14:11.468 Lifetime Error Log Entries: 0 00:14:11.468 Warning Temperature Time: 0 minutes 00:14:11.468 Critical Temperature Time: 0 minutes 00:14:11.468 00:14:11.468 Number of Queues 00:14:11.468 ================ 00:14:11.468 Number of I/O Submission Queues: 127 00:14:11.468 Number of I/O Completion Queues: 127 00:14:11.468 00:14:11.468 Active Namespaces 00:14:11.468 ================= 00:14:11.468 Namespace ID:1 00:14:11.468 Error Recovery Timeout: Unlimited 
00:14:11.468 Command Set Identifier: NVM (00h) 00:14:11.468 Deallocate: Supported 00:14:11.468 Deallocated/Unwritten Error: Not Supported 00:14:11.468 Deallocated Read Value: Unknown 00:14:11.468 Deallocate in Write Zeroes: Not Supported 00:14:11.468 Deallocated Guard Field: 0xFFFF 00:14:11.468 Flush: Supported 00:14:11.468 Reservation: Supported 00:14:11.468 Namespace Sharing Capabilities: Multiple Controllers 00:14:11.468 Size (in LBAs): 131072 (0GiB) 00:14:11.468 Capacity (in LBAs): 131072 (0GiB) 00:14:11.468 Utilization (in LBAs): 131072 (0GiB) 00:14:11.468 NGUID: D2FF75A2BDAB408F99A6FF72FEFF6705 00:14:11.468 UUID: d2ff75a2-bdab-408f-99a6-ff72feff6705 00:14:11.468 Thin Provisioning: Not Supported 00:14:11.468 Per-NS Atomic Units: Yes 00:14:11.468 Atomic Boundary Size (Normal): 0 00:14:11.468 Atomic Boundary Size (PFail): 0 00:14:11.468 Atomic Boundary Offset: 0 00:14:11.468 Maximum Single Source Range Length: 65535 00:14:11.468 Maximum Copy Length: 65535 00:14:11.468 Maximum Source Range Count: 1 00:14:11.468 NGUID/EUI64 Never Reused: No 00:14:11.468 Namespace Write Protected: No 00:14:11.468 Number of LBA Formats: 1 00:14:11.468 Current LBA Format: LBA Format #00 00:14:11.468 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:11.468 00:14:11.468 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:11.727 [2024-12-10 22:45:19.315375] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:17.000 Initializing NVMe Controllers 00:14:17.000 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:17.000 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:17.000 Initialization complete. Launching workers. 00:14:17.000 ======================================================== 00:14:17.000 Latency(us) 00:14:17.000 Device Information : IOPS MiB/s Average min max 00:14:17.000 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31218.98 121.95 4099.39 1230.21 10583.36 00:14:17.000 ======================================================== 00:14:17.000 Total : 31218.98 121.95 4099.39 1230.21 10583.36 00:14:17.000 00:14:17.000 [2024-12-10 22:45:24.424957] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:17.000 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:17.000 [2024-12-10 22:45:24.688664] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:22.272 Initializing NVMe Controllers 00:14:22.272 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:22.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:22.272 Initialization complete. Launching workers. 
00:14:22.272 ======================================================== 00:14:22.272 Latency(us) 00:14:22.272 Device Information : IOPS MiB/s Average min max 00:14:22.272 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29625.60 115.72 4321.99 1256.21 8421.96 00:14:22.272 ======================================================== 00:14:22.272 Total : 29625.60 115.72 4321.99 1256.21 8421.96 00:14:22.272 00:14:22.272 [2024-12-10 22:45:29.709434] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:22.272 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:22.272 [2024-12-10 22:45:29.930345] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:27.547 [2024-12-10 22:45:35.064711] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:27.547 Initializing NVMe Controllers 00:14:27.547 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:27.547 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:27.547 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:27.547 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:27.547 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:27.547 Initialization complete. Launching workers. 
00:14:27.547 Starting thread on core 2 00:14:27.547 Starting thread on core 3 00:14:27.547 Starting thread on core 1 00:14:27.547 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:27.804 [2024-12-10 22:45:35.398999] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:31.096 [2024-12-10 22:45:38.474053] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:31.096 Initializing NVMe Controllers 00:14:31.096 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:31.096 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:31.096 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:31.096 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:31.096 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:31.096 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:31.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:31.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:31.096 Initialization complete. Launching workers. 
00:14:31.096 Starting thread on core 1 with urgent priority queue 00:14:31.096 Starting thread on core 2 with urgent priority queue 00:14:31.096 Starting thread on core 3 with urgent priority queue 00:14:31.096 Starting thread on core 0 with urgent priority queue 00:14:31.096 SPDK bdev Controller (SPDK2 ) core 0: 4323.33 IO/s 23.13 secs/100000 ios 00:14:31.096 SPDK bdev Controller (SPDK2 ) core 1: 5639.33 IO/s 17.73 secs/100000 ios 00:14:31.096 SPDK bdev Controller (SPDK2 ) core 2: 5640.00 IO/s 17.73 secs/100000 ios 00:14:31.096 SPDK bdev Controller (SPDK2 ) core 3: 5964.33 IO/s 16.77 secs/100000 ios 00:14:31.096 ======================================================== 00:14:31.096 00:14:31.096 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:31.096 [2024-12-10 22:45:38.786180] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:31.096 Initializing NVMe Controllers 00:14:31.096 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:31.096 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:31.096 Namespace ID: 1 size: 0GB 00:14:31.096 Initialization complete. 00:14:31.096 INFO: using host memory buffer for IO 00:14:31.096 Hello world! 
00:14:31.096 [2024-12-10 22:45:38.799404] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:31.355 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:31.615 [2024-12-10 22:45:39.101153] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:32.551 Initializing NVMe Controllers 00:14:32.551 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:32.551 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:32.551 Initialization complete. Launching workers. 00:14:32.551 submit (in ns) avg, min, max = 8440.1, 3502.2, 4019168.9 00:14:32.551 complete (in ns) avg, min, max = 28490.5, 2061.1, 4029133.3 00:14:32.551 00:14:32.551 Submit histogram 00:14:32.551 ================ 00:14:32.551 Range in us Cumulative Count 00:14:32.551 3.484 - 3.508: 0.0639% ( 8) 00:14:32.551 3.508 - 3.532: 0.7349% ( 84) 00:14:32.551 3.532 - 3.556: 2.2767% ( 193) 00:14:32.551 3.556 - 3.579: 5.3363% ( 383) 00:14:32.551 3.579 - 3.603: 10.8004% ( 684) 00:14:32.551 3.603 - 3.627: 18.6531% ( 983) 00:14:32.551 3.627 - 3.650: 25.9147% ( 909) 00:14:32.551 3.650 - 3.674: 32.9046% ( 875) 00:14:32.551 3.674 - 3.698: 39.7907% ( 862) 00:14:32.551 3.698 - 3.721: 46.9883% ( 901) 00:14:32.551 3.721 - 3.745: 52.3406% ( 670) 00:14:32.551 3.745 - 3.769: 57.0618% ( 591) 00:14:32.551 3.769 - 3.793: 60.7286% ( 459) 00:14:32.551 3.793 - 3.816: 64.7468% ( 503) 00:14:32.551 3.816 - 3.840: 68.5333% ( 474) 00:14:32.551 3.840 - 3.864: 73.1347% ( 576) 00:14:32.552 3.864 - 3.887: 77.1609% ( 504) 00:14:32.552 3.887 - 3.911: 80.4042% ( 406) 00:14:32.552 3.911 - 3.935: 83.5517% ( 394) 00:14:32.552 3.935 - 3.959: 85.8843% ( 292) 00:14:32.552 3.959 - 3.982: 87.6578% ( 222) 
00:14:32.552 3.982 - 4.006: 89.1996% ( 193) 00:14:32.552 4.006 - 4.030: 90.5416% ( 168) 00:14:32.552 4.030 - 4.053: 91.7159% ( 147) 00:14:32.552 4.053 - 4.077: 92.8743% ( 145) 00:14:32.552 4.077 - 4.101: 93.8888% ( 127) 00:14:32.552 4.101 - 4.124: 94.7675% ( 110) 00:14:32.552 4.124 - 4.148: 95.4785% ( 89) 00:14:32.552 4.148 - 4.172: 95.8460% ( 46) 00:14:32.552 4.172 - 4.196: 96.0936% ( 31) 00:14:32.552 4.196 - 4.219: 96.3253% ( 29) 00:14:32.552 4.219 - 4.243: 96.4851% ( 20) 00:14:32.552 4.243 - 4.267: 96.6608% ( 22) 00:14:32.552 4.267 - 4.290: 96.8286% ( 21) 00:14:32.552 4.290 - 4.314: 96.8845% ( 7) 00:14:32.552 4.314 - 4.338: 97.0123% ( 16) 00:14:32.552 4.338 - 4.361: 97.0602% ( 6) 00:14:32.552 4.361 - 4.385: 97.1401% ( 10) 00:14:32.552 4.385 - 4.409: 97.1641% ( 3) 00:14:32.552 4.409 - 4.433: 97.1960% ( 4) 00:14:32.552 4.433 - 4.456: 97.2599% ( 8) 00:14:32.552 4.456 - 4.480: 97.2919% ( 4) 00:14:32.552 4.480 - 4.504: 97.3159% ( 3) 00:14:32.552 4.504 - 4.527: 97.3558% ( 5) 00:14:32.552 4.527 - 4.551: 97.3638% ( 1) 00:14:32.552 4.551 - 4.575: 97.3798% ( 2) 00:14:32.552 4.575 - 4.599: 97.3878% ( 1) 00:14:32.552 4.599 - 4.622: 97.3958% ( 1) 00:14:32.552 4.622 - 4.646: 97.4037% ( 1) 00:14:32.552 4.646 - 4.670: 97.4197% ( 2) 00:14:32.552 4.693 - 4.717: 97.4517% ( 4) 00:14:32.552 4.717 - 4.741: 97.4597% ( 1) 00:14:32.552 4.741 - 4.764: 97.5076% ( 6) 00:14:32.552 4.764 - 4.788: 97.5395% ( 4) 00:14:32.552 4.788 - 4.812: 97.5555% ( 2) 00:14:32.552 4.812 - 4.836: 97.5875% ( 4) 00:14:32.552 4.836 - 4.859: 97.6354% ( 6) 00:14:32.552 4.859 - 4.883: 97.6913% ( 7) 00:14:32.552 4.883 - 4.907: 97.7712% ( 10) 00:14:32.552 4.907 - 4.930: 97.8511% ( 10) 00:14:32.552 4.930 - 4.954: 97.9230% ( 9) 00:14:32.552 4.954 - 4.978: 97.9549% ( 4) 00:14:32.552 4.978 - 5.001: 97.9629% ( 1) 00:14:32.552 5.001 - 5.025: 98.0029% ( 5) 00:14:32.552 5.025 - 5.049: 98.0189% ( 2) 00:14:32.552 5.049 - 5.073: 98.0428% ( 3) 00:14:32.552 5.073 - 5.096: 98.0748% ( 4) 00:14:32.552 5.096 - 5.120: 98.0828% ( 1) 
00:14:32.552 5.120 - 5.144: 98.0987% ( 2) 00:14:32.552 5.144 - 5.167: 98.1387% ( 5) 00:14:32.552 5.167 - 5.191: 98.1547% ( 2) 00:14:32.552 5.191 - 5.215: 98.1946% ( 5) 00:14:32.552 5.215 - 5.239: 98.2186% ( 3) 00:14:32.552 5.239 - 5.262: 98.2665% ( 6) 00:14:32.552 5.262 - 5.286: 98.2745% ( 1) 00:14:32.552 5.286 - 5.310: 98.2825% ( 1) 00:14:32.552 5.310 - 5.333: 98.3144% ( 4) 00:14:32.552 5.357 - 5.381: 98.3304% ( 2) 00:14:32.552 5.381 - 5.404: 98.3384% ( 1) 00:14:32.552 5.428 - 5.452: 98.3464% ( 1) 00:14:32.552 5.499 - 5.523: 98.3544% ( 1) 00:14:32.552 5.618 - 5.641: 98.3624% ( 1) 00:14:32.552 5.736 - 5.760: 98.3703% ( 1) 00:14:32.552 5.760 - 5.784: 98.3783% ( 1) 00:14:32.552 5.879 - 5.902: 98.3863% ( 1) 00:14:32.552 5.902 - 5.926: 98.3943% ( 1) 00:14:32.552 5.950 - 5.973: 98.4023% ( 1) 00:14:32.552 6.305 - 6.353: 98.4103% ( 1) 00:14:32.552 6.400 - 6.447: 98.4183% ( 1) 00:14:32.552 6.637 - 6.684: 98.4263% ( 1) 00:14:32.552 6.827 - 6.874: 98.4343% ( 1) 00:14:32.552 7.396 - 7.443: 98.4422% ( 1) 00:14:32.552 7.538 - 7.585: 98.4502% ( 1) 00:14:32.552 7.585 - 7.633: 98.4582% ( 1) 00:14:32.552 7.822 - 7.870: 98.4662% ( 1) 00:14:32.552 7.917 - 7.964: 98.4822% ( 2) 00:14:32.552 8.012 - 8.059: 98.4902% ( 1) 00:14:32.552 8.059 - 8.107: 98.4982% ( 1) 00:14:32.552 8.154 - 8.201: 98.5141% ( 2) 00:14:32.552 8.201 - 8.249: 98.5301% ( 2) 00:14:32.552 8.296 - 8.344: 98.5381% ( 1) 00:14:32.552 8.344 - 8.391: 98.5461% ( 1) 00:14:32.552 8.439 - 8.486: 98.5541% ( 1) 00:14:32.552 8.486 - 8.533: 98.5621% ( 1) 00:14:32.552 8.533 - 8.581: 98.5701% ( 1) 00:14:32.552 8.628 - 8.676: 98.5780% ( 1) 00:14:32.552 8.676 - 8.723: 98.5860% ( 1) 00:14:32.552 8.723 - 8.770: 98.5940% ( 1) 00:14:32.552 8.770 - 8.818: 98.6100% ( 2) 00:14:32.552 8.865 - 8.913: 98.6180% ( 1) 00:14:32.552 8.913 - 8.960: 98.6260% ( 1) 00:14:32.552 9.007 - 9.055: 98.6340% ( 1) 00:14:32.552 9.150 - 9.197: 98.6420% ( 1) 00:14:32.552 9.339 - 9.387: 98.6499% ( 1) 00:14:32.552 9.529 - 9.576: 98.6579% ( 1) 00:14:32.552 9.719 - 
9.766: 98.6659% ( 1) 00:14:32.552 9.766 - 9.813: 98.6739% ( 1) 00:14:32.552 9.861 - 9.908: 98.6819% ( 1) 00:14:32.552 9.908 - 9.956: 98.6899% ( 1) 00:14:32.552 10.098 - 10.145: 98.7218% ( 4) 00:14:32.552 10.240 - 10.287: 98.7298% ( 1) 00:14:32.552 10.335 - 10.382: 98.7378% ( 1) 00:14:32.552 10.809 - 10.856: 98.7458% ( 1) 00:14:32.552 10.951 - 10.999: 98.7538% ( 1) 00:14:32.552 11.046 - 11.093: 98.7618% ( 1) 00:14:32.552 11.093 - 11.141: 98.7698% ( 1) 00:14:32.552 11.236 - 11.283: 98.7778% ( 1) 00:14:32.552 11.947 - 11.994: 98.7857% ( 1) 00:14:32.552 11.994 - 12.041: 98.7937% ( 1) 00:14:32.552 12.136 - 12.231: 98.8017% ( 1) 00:14:32.552 12.516 - 12.610: 98.8097% ( 1) 00:14:32.552 13.084 - 13.179: 98.8177% ( 1) 00:14:32.552 13.274 - 13.369: 98.8257% ( 1) 00:14:32.552 13.653 - 13.748: 98.8417% ( 2) 00:14:32.552 14.033 - 14.127: 98.8497% ( 1) 00:14:32.552 14.127 - 14.222: 98.8576% ( 1) 00:14:32.552 14.412 - 14.507: 98.8656% ( 1) 00:14:32.552 14.601 - 14.696: 98.8736% ( 1) 00:14:32.552 14.791 - 14.886: 98.8816% ( 1) 00:14:32.552 15.076 - 15.170: 98.8896% ( 1) 00:14:32.552 16.877 - 16.972: 98.8976% ( 1) 00:14:32.552 17.067 - 17.161: 98.9056% ( 1) 00:14:32.552 17.161 - 17.256: 98.9295% ( 3) 00:14:32.552 17.256 - 17.351: 98.9455% ( 2) 00:14:32.552 17.351 - 17.446: 98.9695% ( 3) 00:14:32.552 17.446 - 17.541: 99.0014% ( 4) 00:14:32.552 17.541 - 17.636: 99.0574% ( 7) 00:14:32.552 17.636 - 17.730: 99.0733% ( 2) 00:14:32.552 17.730 - 17.825: 99.1053% ( 4) 00:14:32.552 17.825 - 17.920: 99.1532% ( 6) 00:14:32.552 17.920 - 18.015: 99.1772% ( 3) 00:14:32.552 18.015 - 18.110: 99.2491% ( 9) 00:14:32.552 18.110 - 18.204: 99.3370% ( 11) 00:14:32.552 18.204 - 18.299: 99.4009% ( 8) 00:14:32.552 18.299 - 18.394: 99.4728% ( 9) 00:14:32.552 18.394 - 18.489: 99.5287% ( 7) 00:14:32.552 18.489 - 18.584: 99.5766% ( 6) 00:14:32.552 18.584 - 18.679: 99.6565% ( 10) 00:14:32.552 18.679 - 18.773: 99.6725% ( 2) 00:14:32.552 18.773 - 18.868: 99.7044% ( 4) 00:14:32.552 18.868 - 18.963: 99.7284% ( 3) 
00:14:32.552 18.963 - 19.058: 99.7444% ( 2) 00:14:32.552 19.058 - 19.153: 99.7524% ( 1) 00:14:32.552 19.153 - 19.247: 99.7603% ( 1) 00:14:32.552 19.342 - 19.437: 99.7683% ( 1) 00:14:32.552 19.627 - 19.721: 99.7843% ( 2) 00:14:32.552 20.006 - 20.101: 99.7923% ( 1) 00:14:32.552 22.281 - 22.376: 99.8003% ( 1) 00:14:32.552 22.566 - 22.661: 99.8083% ( 1) 00:14:32.552 22.945 - 23.040: 99.8163% ( 1) 00:14:32.552 23.419 - 23.514: 99.8243% ( 1) 00:14:32.552 23.514 - 23.609: 99.8402% ( 2) 00:14:32.552 23.893 - 23.988: 99.8562% ( 2) 00:14:32.552 24.462 - 24.652: 99.8642% ( 1) 00:14:32.552 25.221 - 25.410: 99.8722% ( 1) 00:14:32.552 28.065 - 28.255: 99.8802% ( 1) 00:14:32.552 78.886 - 79.265: 99.8882% ( 1) 00:14:32.552 3980.705 - 4004.978: 99.9680% ( 10) 00:14:32.552 4004.978 - 4029.250: 100.0000% ( 4) 00:14:32.552 00:14:32.552 Complete histogram 00:14:32.552 ================== 00:14:32.552 Range in us Cumulative Count 00:14:32.552 2.050 - 2.062: 0.0080% ( 1) 00:14:32.552 2.062 - 2.074: 7.2536% ( 907) 00:14:32.552 2.074 - 2.086: 31.0034% ( 2973) 00:14:32.552 2.086 - 2.098: 35.2293% ( 529) 00:14:32.552 2.098 - 2.110: 44.2163% ( 1125) 00:14:32.552 2.110 - 2.121: 54.8410% ( 1330) 00:14:32.552 2.121 - 2.133: 57.0139% ( 272) 00:14:32.552 2.133 - 2.145: 63.8680% ( 858) 00:14:32.552 2.145 - 2.157: 70.1310% ( 784) 00:14:32.552 2.157 - 2.169: 71.6408% ( 189) 00:14:32.552 2.169 - 2.181: 75.7389% ( 513) 00:14:32.552 2.181 - 2.193: 78.7746% ( 380) 00:14:32.552 2.193 - 2.204: 79.5335% ( 95) 00:14:32.552 2.204 - 2.216: 82.3215% ( 349) 00:14:32.552 2.216 - 2.228: 86.1559% ( 480) 00:14:32.552 2.228 - 2.240: 87.9454% ( 224) 00:14:32.552 2.240 - 2.252: 90.3020% ( 295) 00:14:32.552 2.252 - 2.264: 91.8517% ( 194) 00:14:32.552 2.264 - 2.276: 92.2272% ( 47) 00:14:32.552 2.276 - 2.287: 92.7704% ( 68) 00:14:32.552 2.287 - 2.299: 93.4255% ( 82) 00:14:32.552 2.299 - 2.311: 94.2483% ( 103) 00:14:32.552 2.311 - 2.323: 94.6317% ( 48) 00:14:32.552 2.323 - 2.335: 94.7036% ( 9) 00:14:32.552 2.335 - 2.347: 
94.7356% ( 4) 00:14:32.552 2.347 - 2.359: 94.8394% ( 13) 00:14:32.553 2.359 - 2.370: 95.0152% ( 22) 00:14:32.553 2.370 - 2.382: 95.3347% ( 40) 00:14:32.553 2.382 - 2.394: 95.7022% ( 46) 00:14:32.553 2.394 - 2.406: 96.1176% ( 52) 00:14:32.553 2.406 - 2.418: 96.3413% ( 28) 00:14:32.553 2.418 - 2.430: 96.5809% ( 30) 00:14:32.553 2.430 - 2.441: 96.8366% ( 32) 00:14:32.553 2.441 - 2.453: 96.9803% ( 18) 00:14:32.553 2.453 - 2.465: 97.2280% ( 31) 00:14:32.553 2.465 - 2.477: 97.3958% ( 21) 00:14:32.553 2.477 - 2.489: 97.5395% ( 18) 00:14:32.553 2.489 - 2.501: 97.6354% ( 12) 00:14:32.553 2.501 - 2.513: 97.7952% ( 20) 00:14:32.553 2.513 - 2.524: 97.8751% ( 10) 00:14:32.553 2.524 - 2.536: 97.9709% ( 12) 00:14:32.553 2.536 - 2.548: 98.0029% ( 4) 00:14:32.553 2.548 - 2.560: 98.0588% ( 7) 00:14:32.553 2.560 - 2.572: 98.0987% ( 5) 00:14:32.553 2.572 - 2.584: 98.1307% ( 4) 00:14:32.553 2.584 - 2.596: 98.1706% ( 5) 00:14:32.553 2.596 - 2.607: 98.2026% ( 4) 00:14:32.553 2.607 - 2.619: 98.2186% ( 2) 00:14:32.553 2.619 - 2.631: 98.2266% ( 1) 00:14:32.553 2.631 - 2.643: 98.2425% ( 2) 00:14:32.553 2.643 - 2.655: 98.2665% ( 3) 00:14:32.553 2.655 - 2.667: 98.2745% ( 1) 00:14:32.553 2.679 - 2.690: 98.2905% ( 2) 00:14:32.553 2.690 - 2.702: 98.2985% ( 1) 00:14:32.553 2.726 - 2.738: 98.3064% ( 1) 00:14:32.553 2.738 - 2.750: 98.3224% ( 2) 00:14:32.553 2.761 - 2.773: 98.3304% ( 1) 00:14:32.553 2.785 - 2.797: 98.3384% ( 1) 00:14:32.553 2.844 - 2.856: 98.3464% ( 1) 00:14:32.553 2.904 - 2.916: 98.3544% ( 1) 00:14:32.553 2.975 - 2.987: 98.3624% ( 1) 00:14:32.553 3.058 - 3.081: 98.3703% ( 1) 00:14:32.553 3.153 - 3.176: 98.3783% ( 1) 00:14:32.553 3.200 - 3.224: 98.3943% ( 2) 00:14:32.553 3.319 - 3.342: 98.4023% ( 1) 00:14:32.553 3.461 - 3.484: 98.4103% ( 1) 00:14:32.553 3.508 - 3.532: 98.4343% ( 3) 00:14:32.553 3.556 - 3.579: 98.4502% ( 2) 00:14:32.553 3.721 - 3.745: 98.4582% ( 1) 00:14:32.553 3.745 - 3.769: 98.4662% ( 1) 00:14:32.553 3.769 - 3.793: 98.4742% ( 1) 00:14:32.553 3.793 - 3.816: 98.4822% 
( 1) 00:14:32.553 3.887 - 3.911: 98.4902% ( 1) 00:14:32.553 [2024-12-10 22:45:40.195471] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:32.553 3.911 - 3.935: 98.4982% ( 1) 00:14:32.553 3.959 - 3.982: 98.5062% ( 1) 00:14:32.553 4.006 - 4.030: 98.5141% ( 1) 00:14:32.553 4.053 - 4.077: 98.5221% ( 1) 00:14:32.553 5.594 - 5.618: 98.5301% ( 1) 00:14:32.553 5.736 - 5.760: 98.5381% ( 1) 00:14:32.553 5.807 - 5.831: 98.5461% ( 1) 00:14:32.553 5.831 - 5.855: 98.5541% ( 1) 00:14:32.553 5.855 - 5.879: 98.5621% ( 1) 00:14:32.553 5.997 - 6.021: 98.5701% ( 1) 00:14:32.553 6.021 - 6.044: 98.5860% ( 2) 00:14:32.553 6.116 - 6.163: 98.5940% ( 1) 00:14:32.553 6.258 - 6.305: 98.6020% ( 1) 00:14:32.553 6.447 - 6.495: 98.6100% ( 1) 00:14:32.553 6.779 - 6.827: 98.6180% ( 1) 00:14:32.553 6.827 - 6.874: 98.6260% ( 1) 00:14:32.553 6.874 - 6.921: 98.6340% ( 1) 00:14:32.553 6.969 - 7.016: 98.6420% ( 1) 00:14:32.553 7.443 - 7.490: 98.6499% ( 1) 00:14:32.553 7.490 - 7.538: 98.6659% ( 2) 00:14:32.553 7.633 - 7.680: 98.6739% ( 1) 00:14:32.553 8.012 - 8.059: 98.6819% ( 1) 00:14:32.553 9.387 - 9.434: 98.6899% ( 1) 00:14:32.553 11.188 - 11.236: 98.6979% ( 1) 00:14:32.553 15.550 - 15.644: 98.7059% ( 1) 00:14:32.553 15.739 - 15.834: 98.7298% ( 3) 00:14:32.553 15.929 - 16.024: 98.7618% ( 4) 00:14:32.553 16.024 - 16.119: 98.7857% ( 3) 00:14:32.553 16.119 - 16.213: 98.8257% ( 5) 00:14:32.553 16.213 - 16.308: 98.8736% ( 6) 00:14:32.553 16.308 - 16.403: 98.9136% ( 5) 00:14:32.553 16.403 - 16.498: 98.9455% ( 4) 00:14:32.553 16.498 - 16.593: 98.9775% ( 4) 00:14:32.553 16.593 - 16.687: 99.0094% ( 4) 00:14:32.553 16.687 - 16.782: 99.0574% ( 6) 00:14:32.553 16.782 - 16.877: 99.0893% ( 4) 00:14:32.553 16.877 - 16.972: 99.1293% ( 5) 00:14:32.553 16.972 - 17.067: 99.1372% ( 1) 00:14:32.553 17.067 - 17.161: 99.1532% ( 2) 00:14:32.553 17.161 - 17.256: 99.1692% ( 2) 00:14:32.553 17.256 - 17.351: 99.1852% ( 2) 00:14:32.553 17.351 - 17.446: 99.2012% ( 2) 
00:14:32.553 17.541 - 17.636: 99.2091% ( 1) 00:14:32.553 17.636 - 17.730: 99.2171% ( 1) 00:14:32.553 17.730 - 17.825: 99.2251% ( 1) 00:14:32.553 17.825 - 17.920: 99.2331% ( 1) 00:14:32.553 17.920 - 18.015: 99.2491% ( 2) 00:14:32.553 18.110 - 18.204: 99.2730% ( 3) 00:14:32.553 18.204 - 18.299: 99.2810% ( 1) 00:14:32.553 18.299 - 18.394: 99.2970% ( 2) 00:14:32.553 18.489 - 18.584: 99.3210% ( 3) 00:14:32.553 21.618 - 21.713: 99.3290% ( 1) 00:14:32.553 22.850 - 22.945: 99.3370% ( 1) 00:14:32.553 23.514 - 23.609: 99.3449% ( 1) 00:14:32.553 3980.705 - 4004.978: 99.6805% ( 42) 00:14:32.553 4004.978 - 4029.250: 100.0000% ( 40) 00:14:32.553 00:14:32.553 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:32.553 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:32.553 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:32.553 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:32.553 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:32.812 [ 00:14:32.812 { 00:14:32.812 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:32.812 "subtype": "Discovery", 00:14:32.812 "listen_addresses": [], 00:14:32.812 "allow_any_host": true, 00:14:32.812 "hosts": [] 00:14:32.812 }, 00:14:32.812 { 00:14:32.812 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:32.812 "subtype": "NVMe", 00:14:32.812 "listen_addresses": [ 00:14:32.812 { 00:14:32.812 "trtype": "VFIOUSER", 00:14:32.812 "adrfam": "IPv4", 00:14:32.812 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:32.812 "trsvcid": "0" 00:14:32.812 } 00:14:32.812 
], 00:14:32.812 "allow_any_host": true, 00:14:32.812 "hosts": [], 00:14:32.812 "serial_number": "SPDK1", 00:14:32.812 "model_number": "SPDK bdev Controller", 00:14:32.812 "max_namespaces": 32, 00:14:32.812 "min_cntlid": 1, 00:14:32.812 "max_cntlid": 65519, 00:14:32.812 "namespaces": [ 00:14:32.812 { 00:14:32.812 "nsid": 1, 00:14:32.812 "bdev_name": "Malloc1", 00:14:32.812 "name": "Malloc1", 00:14:32.812 "nguid": "58187BAD49984DB8ADF2527C733D4387", 00:14:32.812 "uuid": "58187bad-4998-4db8-adf2-527c733d4387" 00:14:32.812 }, 00:14:32.812 { 00:14:32.812 "nsid": 2, 00:14:32.812 "bdev_name": "Malloc3", 00:14:32.812 "name": "Malloc3", 00:14:32.812 "nguid": "7407FC18B62841B180AF199873156F2F", 00:14:32.812 "uuid": "7407fc18-b628-41b1-80af-199873156f2f" 00:14:32.812 } 00:14:32.812 ] 00:14:32.812 }, 00:14:32.812 { 00:14:32.812 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:32.812 "subtype": "NVMe", 00:14:32.812 "listen_addresses": [ 00:14:32.812 { 00:14:32.812 "trtype": "VFIOUSER", 00:14:32.812 "adrfam": "IPv4", 00:14:32.812 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:32.812 "trsvcid": "0" 00:14:32.812 } 00:14:32.812 ], 00:14:32.812 "allow_any_host": true, 00:14:32.812 "hosts": [], 00:14:32.812 "serial_number": "SPDK2", 00:14:32.812 "model_number": "SPDK bdev Controller", 00:14:32.812 "max_namespaces": 32, 00:14:32.812 "min_cntlid": 1, 00:14:32.812 "max_cntlid": 65519, 00:14:32.812 "namespaces": [ 00:14:32.812 { 00:14:32.812 "nsid": 1, 00:14:32.812 "bdev_name": "Malloc2", 00:14:32.812 "name": "Malloc2", 00:14:32.812 "nguid": "D2FF75A2BDAB408F99A6FF72FEFF6705", 00:14:32.812 "uuid": "d2ff75a2-bdab-408f-99a6-ff72feff6705" 00:14:32.812 } 00:14:32.812 ] 00:14:32.812 } 00:14:32.812 ] 00:14:32.812 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:32.812 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=45724 00:14:32.812 22:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:32.812 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:32.812 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:32.812 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:32.812 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:32.812 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:32.812 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:32.812 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:33.071 [2024-12-10 22:45:40.692045] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:33.330 Malloc4 00:14:33.330 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:33.588 [2024-12-10 22:45:41.142449] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:33.588 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:33.588 Asynchronous Event 
Request test 00:14:33.588 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:33.588 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:33.588 Registering asynchronous event callbacks... 00:14:33.588 Starting namespace attribute notice tests for all controllers... 00:14:33.588 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:33.588 aer_cb - Changed Namespace 00:14:33.588 Cleaning up... 00:14:33.846 [ 00:14:33.846 { 00:14:33.846 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:33.846 "subtype": "Discovery", 00:14:33.846 "listen_addresses": [], 00:14:33.846 "allow_any_host": true, 00:14:33.846 "hosts": [] 00:14:33.846 }, 00:14:33.846 { 00:14:33.846 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:33.846 "subtype": "NVMe", 00:14:33.846 "listen_addresses": [ 00:14:33.846 { 00:14:33.846 "trtype": "VFIOUSER", 00:14:33.846 "adrfam": "IPv4", 00:14:33.846 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:33.846 "trsvcid": "0" 00:14:33.846 } 00:14:33.846 ], 00:14:33.846 "allow_any_host": true, 00:14:33.846 "hosts": [], 00:14:33.846 "serial_number": "SPDK1", 00:14:33.846 "model_number": "SPDK bdev Controller", 00:14:33.846 "max_namespaces": 32, 00:14:33.846 "min_cntlid": 1, 00:14:33.846 "max_cntlid": 65519, 00:14:33.846 "namespaces": [ 00:14:33.846 { 00:14:33.846 "nsid": 1, 00:14:33.846 "bdev_name": "Malloc1", 00:14:33.846 "name": "Malloc1", 00:14:33.846 "nguid": "58187BAD49984DB8ADF2527C733D4387", 00:14:33.846 "uuid": "58187bad-4998-4db8-adf2-527c733d4387" 00:14:33.846 }, 00:14:33.846 { 00:14:33.846 "nsid": 2, 00:14:33.846 "bdev_name": "Malloc3", 00:14:33.846 "name": "Malloc3", 00:14:33.846 "nguid": "7407FC18B62841B180AF199873156F2F", 00:14:33.846 "uuid": "7407fc18-b628-41b1-80af-199873156f2f" 00:14:33.846 } 00:14:33.846 ] 00:14:33.846 }, 00:14:33.846 { 00:14:33.846 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:33.846 "subtype": "NVMe", 00:14:33.846 "listen_addresses": [ 00:14:33.846 { 00:14:33.846 
"trtype": "VFIOUSER", 00:14:33.846 "adrfam": "IPv4", 00:14:33.846 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:33.846 "trsvcid": "0" 00:14:33.846 } 00:14:33.846 ], 00:14:33.846 "allow_any_host": true, 00:14:33.846 "hosts": [], 00:14:33.846 "serial_number": "SPDK2", 00:14:33.846 "model_number": "SPDK bdev Controller", 00:14:33.846 "max_namespaces": 32, 00:14:33.846 "min_cntlid": 1, 00:14:33.846 "max_cntlid": 65519, 00:14:33.846 "namespaces": [ 00:14:33.846 { 00:14:33.846 "nsid": 1, 00:14:33.846 "bdev_name": "Malloc2", 00:14:33.846 "name": "Malloc2", 00:14:33.846 "nguid": "D2FF75A2BDAB408F99A6FF72FEFF6705", 00:14:33.846 "uuid": "d2ff75a2-bdab-408f-99a6-ff72feff6705" 00:14:33.846 }, 00:14:33.846 { 00:14:33.846 "nsid": 2, 00:14:33.846 "bdev_name": "Malloc4", 00:14:33.846 "name": "Malloc4", 00:14:33.846 "nguid": "68B8724CDC8A47FCB55265748804EC65", 00:14:33.846 "uuid": "68b8724c-dc8a-47fc-b552-65748804ec65" 00:14:33.846 } 00:14:33.846 ] 00:14:33.846 } 00:14:33.846 ] 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 45724 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 39499 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 39499 ']' 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 39499 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 39499 00:14:33.846 22:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 39499' 00:14:33.846 killing process with pid 39499 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 39499 00:14:33.846 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 39499 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=45868 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 45868' 00:14:34.104 Process pid: 45868 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:34.104 22:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 45868 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 45868 ']' 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.104 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:34.364 [2024-12-10 22:45:41.867013] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:34.364 [2024-12-10 22:45:41.868096] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:14:34.364 [2024-12-10 22:45:41.868158] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.364 [2024-12-10 22:45:41.933897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:34.364 [2024-12-10 22:45:41.993981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.364 [2024-12-10 22:45:41.994032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:34.364 [2024-12-10 22:45:41.994046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.364 [2024-12-10 22:45:41.994058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.364 [2024-12-10 22:45:41.994083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.364 [2024-12-10 22:45:41.995654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.364 [2024-12-10 22:45:41.995689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.364 [2024-12-10 22:45:41.995718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.364 [2024-12-10 22:45:41.995722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.364 [2024-12-10 22:45:42.085927] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:34.364 [2024-12-10 22:45:42.086138] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:34.364 [2024-12-10 22:45:42.086434] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:34.364 [2024-12-10 22:45:42.087079] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:34.364 [2024-12-10 22:45:42.087292] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:14:34.623 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.623 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:34.623 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:35.561 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:35.820 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:35.820 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:35.820 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:35.820 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:35.820 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:36.078 Malloc1 00:14:36.078 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:36.644 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:36.644 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:37.211 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:37.211 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:37.211 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:37.211 Malloc2 00:14:37.211 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:37.526 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:37.834 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:38.092 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:38.092 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 45868 00:14:38.092 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 45868 ']' 00:14:38.092 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 45868 00:14:38.092 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:38.092 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.092 22:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 45868 00:14:38.092 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.092 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.092 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 45868' 00:14:38.092 killing process with pid 45868 00:14:38.092 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 45868 00:14:38.092 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 45868 00:14:38.350 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:38.350 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:38.350 00:14:38.350 real 0m53.583s 00:14:38.350 user 3m26.917s 00:14:38.350 sys 0m3.960s 00:14:38.350 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.350 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:38.350 ************************************ 00:14:38.350 END TEST nvmf_vfio_user 00:14:38.350 ************************************ 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 
-- # set +x 00:14:38.609 ************************************ 00:14:38.609 START TEST nvmf_vfio_user_nvme_compliance 00:14:38.609 ************************************ 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:38.609 * Looking for test storage... 00:14:38.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.609 22:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:38.609 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.609 22:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:38.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.610 --rc genhtml_branch_coverage=1 00:14:38.610 --rc genhtml_function_coverage=1 00:14:38.610 --rc genhtml_legend=1 00:14:38.610 --rc geninfo_all_blocks=1 00:14:38.610 --rc geninfo_unexecuted_blocks=1 00:14:38.610 00:14:38.610 ' 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:38.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.610 --rc genhtml_branch_coverage=1 00:14:38.610 --rc genhtml_function_coverage=1 00:14:38.610 --rc genhtml_legend=1 00:14:38.610 --rc geninfo_all_blocks=1 00:14:38.610 --rc geninfo_unexecuted_blocks=1 00:14:38.610 00:14:38.610 ' 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:38.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.610 --rc genhtml_branch_coverage=1 00:14:38.610 --rc genhtml_function_coverage=1 00:14:38.610 --rc 
genhtml_legend=1 00:14:38.610 --rc geninfo_all_blocks=1 00:14:38.610 --rc geninfo_unexecuted_blocks=1 00:14:38.610 00:14:38.610 ' 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:38.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.610 --rc genhtml_branch_coverage=1 00:14:38.610 --rc genhtml_function_coverage=1 00:14:38.610 --rc genhtml_legend=1 00:14:38.610 --rc geninfo_all_blocks=1 00:14:38.610 --rc geninfo_unexecuted_blocks=1 00:14:38.610 00:14:38.610 ' 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.610 22:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:38.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.610 22:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=46480 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 46480' 00:14:38.610 Process pid: 46480 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 46480 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 46480 ']' 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.610 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:38.870 [2024-12-10 22:45:46.350917] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:14:38.870 [2024-12-10 22:45:46.351000] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.870 [2024-12-10 22:45:46.420810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:38.870 [2024-12-10 22:45:46.480934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.870 [2024-12-10 22:45:46.480990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.870 [2024-12-10 22:45:46.481018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.870 [2024-12-10 22:45:46.481030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.870 [2024-12-10 22:45:46.481040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:38.870 [2024-12-10 22:45:46.482448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.870 [2024-12-10 22:45:46.482513] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.870 [2024-12-10 22:45:46.482517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.128 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.128 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:39.128 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.068 22:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.068 malloc0 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:40.068 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:40.329 00:14:40.329 00:14:40.329 CUnit - A unit testing framework for C - Version 2.1-3 00:14:40.329 http://cunit.sourceforge.net/ 00:14:40.329 00:14:40.329 00:14:40.329 Suite: nvme_compliance 00:14:40.329 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 22:45:47.855365] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.329 [2024-12-10 22:45:47.856941] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:40.329 [2024-12-10 22:45:47.856965] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:40.329 [2024-12-10 22:45:47.856993] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:40.329 [2024-12-10 22:45:47.858382] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.329 passed 00:14:40.329 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 22:45:47.940970] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.329 [2024-12-10 22:45:47.943994] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.329 passed 00:14:40.329 Test: admin_identify_ns ...[2024-12-10 22:45:48.032133] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.587 [2024-12-10 22:45:48.091562] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:40.587 [2024-12-10 22:45:48.099575] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:40.587 [2024-12-10 22:45:48.120692] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:40.587 passed 00:14:40.587 Test: admin_get_features_mandatory_features ...[2024-12-10 22:45:48.204386] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.587 [2024-12-10 22:45:48.207413] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.587 passed 00:14:40.587 Test: admin_get_features_optional_features ...[2024-12-10 22:45:48.292009] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.587 [2024-12-10 22:45:48.295038] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.844 passed 00:14:40.844 Test: admin_set_features_number_of_queues ...[2024-12-10 22:45:48.379216] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.844 [2024-12-10 22:45:48.484678] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.844 passed 00:14:40.844 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 22:45:48.568379] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.844 [2024-12-10 22:45:48.571403] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.102 passed 00:14:41.102 Test: admin_get_log_page_with_lpo ...[2024-12-10 22:45:48.651644] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.102 [2024-12-10 22:45:48.720561] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:41.102 [2024-12-10 22:45:48.733628] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.102 passed 00:14:41.102 Test: fabric_property_get ...[2024-12-10 22:45:48.817206] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.102 [2024-12-10 22:45:48.818482] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:41.102 [2024-12-10 22:45:48.820226] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.361 passed 00:14:41.361 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 22:45:48.903751] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.361 [2024-12-10 22:45:48.905083] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:41.361 [2024-12-10 22:45:48.906773] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.361 passed 00:14:41.361 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 22:45:48.989922] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.361 [2024-12-10 22:45:49.073565] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:41.361 [2024-12-10 22:45:49.089559] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:41.621 [2024-12-10 22:45:49.094687] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.621 passed 00:14:41.621 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 22:45:49.181096] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.621 [2024-12-10 22:45:49.182372] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:41.621 [2024-12-10 22:45:49.184116] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.621 passed 00:14:41.621 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 22:45:49.266216] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.621 [2024-12-10 22:45:49.342576] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:41.881 [2024-12-10 
22:45:49.366574] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:41.881 [2024-12-10 22:45:49.371645] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.881 passed 00:14:41.881 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 22:45:49.454151] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.881 [2024-12-10 22:45:49.455437] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:41.881 [2024-12-10 22:45:49.455490] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:41.881 [2024-12-10 22:45:49.457176] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.881 passed 00:14:41.881 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 22:45:49.540455] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:42.139 [2024-12-10 22:45:49.639570] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:42.139 [2024-12-10 22:45:49.647557] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:42.139 [2024-12-10 22:45:49.655570] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:42.139 [2024-12-10 22:45:49.663567] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:42.139 [2024-12-10 22:45:49.692657] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:42.139 passed 00:14:42.139 Test: admin_create_io_sq_verify_pc ...[2024-12-10 22:45:49.776331] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:42.139 [2024-12-10 22:45:49.790572] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:42.139 [2024-12-10 22:45:49.807741] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:42.139 passed 00:14:42.397 Test: admin_create_io_qp_max_qps ...[2024-12-10 22:45:49.891301] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.332 [2024-12-10 22:45:51.008566] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:43.908 [2024-12-10 22:45:51.396497] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.908 passed 00:14:43.908 Test: admin_create_io_sq_shared_cq ...[2024-12-10 22:45:51.479141] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.908 [2024-12-10 22:45:51.609572] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:44.168 [2024-12-10 22:45:51.646661] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:44.168 passed 00:14:44.168 00:14:44.168 Run Summary: Type Total Ran Passed Failed Inactive 00:14:44.168 suites 1 1 n/a 0 0 00:14:44.168 tests 18 18 18 0 0 00:14:44.168 asserts 360 360 360 0 n/a 00:14:44.168 00:14:44.168 Elapsed time = 1.572 seconds 00:14:44.168 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 46480 00:14:44.168 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 46480 ']' 00:14:44.168 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 46480 00:14:44.168 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:44.168 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.168 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 46480 00:14:44.168 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.168 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.168 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 46480' 00:14:44.168 killing process with pid 46480 00:14:44.168 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 46480 00:14:44.168 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 46480 00:14:44.425 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:44.425 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:44.425 00:14:44.425 real 0m5.874s 00:14:44.425 user 0m16.480s 00:14:44.425 sys 0m0.568s 00:14:44.425 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.425 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:44.425 ************************************ 00:14:44.425 END TEST nvmf_vfio_user_nvme_compliance 00:14:44.425 ************************************ 00:14:44.425 22:45:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:44.425 22:45:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:44.426 22:45:52 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.426 22:45:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:44.426 ************************************ 00:14:44.426 START TEST nvmf_vfio_user_fuzz 00:14:44.426 ************************************ 00:14:44.426 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:44.426 * Looking for test storage... 00:14:44.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.426 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:44.426 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:14:44.426 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.686 22:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:44.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.686 --rc genhtml_branch_coverage=1 00:14:44.686 --rc genhtml_function_coverage=1 00:14:44.686 --rc genhtml_legend=1 00:14:44.686 --rc geninfo_all_blocks=1 00:14:44.686 --rc geninfo_unexecuted_blocks=1 00:14:44.686 00:14:44.686 ' 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:44.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.686 --rc genhtml_branch_coverage=1 00:14:44.686 --rc genhtml_function_coverage=1 00:14:44.686 --rc genhtml_legend=1 00:14:44.686 --rc geninfo_all_blocks=1 00:14:44.686 --rc geninfo_unexecuted_blocks=1 00:14:44.686 00:14:44.686 ' 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:44.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.686 --rc genhtml_branch_coverage=1 00:14:44.686 --rc genhtml_function_coverage=1 00:14:44.686 --rc genhtml_legend=1 00:14:44.686 --rc geninfo_all_blocks=1 00:14:44.686 --rc geninfo_unexecuted_blocks=1 00:14:44.686 00:14:44.686 ' 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:44.686 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:44.686 --rc genhtml_branch_coverage=1 00:14:44.686 --rc genhtml_function_coverage=1 00:14:44.686 --rc genhtml_legend=1 00:14:44.686 --rc geninfo_all_blocks=1 00:14:44.686 --rc geninfo_unexecuted_blocks=1 00:14:44.686 00:14:44.686 ' 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.686 22:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:44.686 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:44.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=47302 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 47302' 00:14:44.687 Process pid: 47302 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 47302 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 47302 ']' 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.687 22:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.687 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:44.945 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.945 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:44.945 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:45.883 malloc0 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:45.883 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:17.960 Fuzzing completed. Shutting down the fuzz application 00:15:17.960 00:15:17.960 Dumping successful admin opcodes: 00:15:17.960 9, 10, 00:15:17.960 Dumping successful io opcodes: 00:15:17.960 0, 00:15:17.960 NS: 0x20000081ef00 I/O qp, Total commands completed: 675487, total successful commands: 2628, random_seed: 2186544128 00:15:17.960 NS: 0x20000081ef00 admin qp, Total commands completed: 165536, total successful commands: 37, random_seed: 1734954560 00:15:17.960 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:17.960 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.960 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:17.960 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.960 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 47302 00:15:17.960 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 47302 ']' 00:15:17.960 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 47302 00:15:17.960 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:17.960 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.960 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47302 00:15:17.960 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.960 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.960 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47302' 00:15:17.960 killing process with pid 47302 00:15:17.960 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 47302 00:15:17.960 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 47302 00:15:17.960 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:17.960 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:17.960 00:15:17.960 real 0m32.272s 00:15:17.960 user 0m34.394s 00:15:17.960 sys 0m25.833s 00:15:17.960 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.960 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:17.960 ************************************ 00:15:17.960 END TEST nvmf_vfio_user_fuzz 00:15:17.961 ************************************ 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.961 ************************************ 00:15:17.961 START TEST nvmf_auth_target 00:15:17.961 ************************************ 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:17.961 * Looking for test storage... 00:15:17.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 
00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:17.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.961 --rc genhtml_branch_coverage=1 00:15:17.961 --rc genhtml_function_coverage=1 00:15:17.961 --rc genhtml_legend=1 00:15:17.961 --rc geninfo_all_blocks=1 00:15:17.961 --rc geninfo_unexecuted_blocks=1 00:15:17.961 00:15:17.961 ' 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:17.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.961 --rc genhtml_branch_coverage=1 00:15:17.961 --rc genhtml_function_coverage=1 00:15:17.961 --rc genhtml_legend=1 00:15:17.961 --rc geninfo_all_blocks=1 00:15:17.961 --rc geninfo_unexecuted_blocks=1 00:15:17.961 00:15:17.961 ' 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:17.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.961 --rc genhtml_branch_coverage=1 00:15:17.961 --rc genhtml_function_coverage=1 00:15:17.961 --rc genhtml_legend=1 00:15:17.961 --rc geninfo_all_blocks=1 00:15:17.961 --rc geninfo_unexecuted_blocks=1 00:15:17.961 00:15:17.961 ' 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:17.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.961 --rc genhtml_branch_coverage=1 00:15:17.961 --rc genhtml_function_coverage=1 00:15:17.961 --rc genhtml_legend=1 00:15:17.961 --rc geninfo_all_blocks=1 00:15:17.961 --rc geninfo_unexecuted_blocks=1 00:15:17.961 00:15:17.961 ' 
00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.961 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:17.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:17.962 22:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:17.962 22:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:17.962 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:19.337 22:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:19.337 22:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:19.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:19.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.337 
22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:19.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.337 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.338 
22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:19.338 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:19.338 22:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:19.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:15:19.338 00:15:19.338 --- 10.0.0.2 ping statistics --- 00:15:19.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.338 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:15:19.338 00:15:19.338 --- 10.0.0.1 ping statistics --- 00:15:19.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.338 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=52664 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 52664 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 52664 ']' 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.338 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=52688 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cd9e07ad5f53d4e6be7ce6974d1facaa56c2e6ac9b1aecc8 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dzu 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cd9e07ad5f53d4e6be7ce6974d1facaa56c2e6ac9b1aecc8 0 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cd9e07ad5f53d4e6be7ce6974d1facaa56c2e6ac9b1aecc8 0 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cd9e07ad5f53d4e6be7ce6974d1facaa56c2e6ac9b1aecc8 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:19.596 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dzu 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dzu 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.dzu 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=99aee7f26af55232e394c9577400ebbec188863aa9168e5c55cbaa8795b0f595 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.W2y 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 99aee7f26af55232e394c9577400ebbec188863aa9168e5c55cbaa8795b0f595 3 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 99aee7f26af55232e394c9577400ebbec188863aa9168e5c55cbaa8795b0f595 3 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=99aee7f26af55232e394c9577400ebbec188863aa9168e5c55cbaa8795b0f595 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.W2y 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.W2y 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.W2y 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c4a8bfb7a3e5aed78afb642cd136b6a7 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.R6W 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c4a8bfb7a3e5aed78afb642cd136b6a7 1 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
c4a8bfb7a3e5aed78afb642cd136b6a7 1 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c4a8bfb7a3e5aed78afb642cd136b6a7 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.R6W 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.R6W 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.R6W 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ec5baf9dbd919c0006d36a996c3d87a8ed16d6151c0d557a 00:15:19.855 22:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.M0Q 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ec5baf9dbd919c0006d36a996c3d87a8ed16d6151c0d557a 2 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ec5baf9dbd919c0006d36a996c3d87a8ed16d6151c0d557a 2 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ec5baf9dbd919c0006d36a996c3d87a8ed16d6151c0d557a 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.M0Q 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.M0Q 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.M0Q 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4ae6b7a9d2570b89ffb4469f6294e00d9ceed396b6c8049b 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ULb 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4ae6b7a9d2570b89ffb4469f6294e00d9ceed396b6c8049b 2 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4ae6b7a9d2570b89ffb4469f6294e00d9ceed396b6c8049b 2 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4ae6b7a9d2570b89ffb4469f6294e00d9ceed396b6c8049b 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ULb 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ULb 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.ULb 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8c94166c32a7d0fce7ba7914034b0899 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8E8 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8c94166c32a7d0fce7ba7914034b0899 1 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8c94166c32a7d0fce7ba7914034b0899 1 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8c94166c32a7d0fce7ba7914034b0899 00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:15:19.855 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:20.113 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8E8 00:15:20.113 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8E8 00:15:20.113 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.8E8 00:15:20.113 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:20.113 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:20.113 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:20.113 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:20.113 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:20.113 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:20.113 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:20.113 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c950ef5f5c227a8d4fef752748da9b548394ae707f7b17e29a3393552f2b3830 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4kf 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c950ef5f5c227a8d4fef752748da9b548394ae707f7b17e29a3393552f2b3830 3 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 c950ef5f5c227a8d4fef752748da9b548394ae707f7b17e29a3393552f2b3830 3 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c950ef5f5c227a8d4fef752748da9b548394ae707f7b17e29a3393552f2b3830 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4kf 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4kf 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.4kf 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 52664 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 52664 ']' 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
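Each `gen_dhchap_key <digest> <len>` call above draws `len/2` random bytes via `xxd -p -c0 /dev/urandom` (yielding a `len`-character hex string) and then pipes it through an inline `python -` to produce the `DHHC-1:<digest>:<base64>:` form. The exact inline snippet is not shown in the log; the following is a plausible Python equivalent, assuming the NVMe DH-HMAC-CHAP key representation (ASCII secret with its little-endian CRC32 appended, then base64-encoded):

```python
import base64
import secrets
import zlib

def gen_dhchap_key(length):
    # Equivalent of `xxd -p -c0 -l <length/2> /dev/urandom`: length/2 random
    # bytes rendered as a hex string of <length> ASCII characters.
    return secrets.token_hex(length // 2)

def format_dhchap_key(key, digest_id, prefix="DHHC-1"):
    # Plausible stand-in for the inline `python -` the trace runs: the ASCII
    # secret with its CRC32 appended (little-endian), base64-encoded, and
    # wrapped as <prefix>:<digest>:<b64>: per the DH-HMAC-CHAP key format.
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "{}:{:02x}:{}:".format(prefix, digest_id, base64.b64encode(raw + crc).decode())

# e.g. keys[0] above: a 48-character secret with the null digest (index 0)
print(format_dhchap_key("cd9e07ad5f53d4e6be7ce6974d1facaa56c2e6ac9b1aecc8", 0))
```

The digest index matches the `digests` map in the trace (null=0, sha256=1, sha384=2, sha512=3), which is why keys[1] (sha256) is formatted with digest 1 and ckeys[0] (sha512) with digest 3.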
00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.114 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.371 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.371 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:20.371 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 52688 /var/tmp/host.sock 00:15:20.371 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 52688 ']' 00:15:20.371 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:20.371 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.371 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:20.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:20.371 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:20.371 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dzu
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.629 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.dzu
00:15:20.630 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.dzu
00:15:20.887 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.W2y ]]
00:15:20.887 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W2y
00:15:20.887 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.887 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.887 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.887 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W2y
00:15:20.887 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W2y
00:15:21.145 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:21.145 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.R6W
00:15:21.145 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.145 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.145 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.145 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.R6W
00:15:21.145 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.R6W
00:15:21.403 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.M0Q ]]
00:15:21.403 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.M0Q
00:15:21.403 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.403 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.403 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.403 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.M0Q
00:15:21.403 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.M0Q
00:15:21.662 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:21.662 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ULb
00:15:21.662 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.662 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.662 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.662 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ULb
00:15:21.662 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ULb
00:15:21.920 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.8E8 ]]
00:15:21.920 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8E8
00:15:21.920 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.920 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.920 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.920 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8E8
00:15:21.920 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8E8
00:15:22.179 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:22.179 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4kf
00:15:22.179 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.179 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.179 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.179 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.4kf
00:15:22.179 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.4kf
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:22.745 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:23.313
00:15:23.313 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:23.313 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:23.313 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:23.571 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:23.571 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:23.571 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.571 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.571 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.571 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:23.571 {
00:15:23.571 "cntlid": 1,
00:15:23.571 "qid": 0,
00:15:23.571 "state": "enabled",
00:15:23.571 "thread": "nvmf_tgt_poll_group_000",
00:15:23.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:23.572 "listen_address": {
00:15:23.572 "trtype": "TCP",
00:15:23.572 "adrfam": "IPv4",
00:15:23.572 "traddr": "10.0.0.2",
00:15:23.572 "trsvcid": "4420"
00:15:23.572 },
00:15:23.572 "peer_address": {
00:15:23.572 "trtype": "TCP",
00:15:23.572 "adrfam": "IPv4",
00:15:23.572 "traddr": "10.0.0.1",
00:15:23.572 "trsvcid": "37464"
00:15:23.572 },
00:15:23.572 "auth": {
00:15:23.572 "state": "completed",
00:15:23.572 "digest": "sha256",
00:15:23.572 "dhgroup": "null"
00:15:23.572 }
00:15:23.572 }
00:15:23.572 ]'
00:15:23.572 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:23.572 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:23.572 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:23.572 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:23.572 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:23.572 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:23.572 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:23.572 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:23.829 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=:
00:15:23.829 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=:
00:15:24.764 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:24.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:24.765 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:24.765 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.765 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.765 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.765 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:24.765 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:24.765 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:25.023 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:25.281
00:15:25.281 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:25.281 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:25.281 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:25.540 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:25.540 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:25.540 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.540 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.798 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.798 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:25.798 {
00:15:25.798 "cntlid": 3,
00:15:25.798 "qid": 0,
00:15:25.798 "state": "enabled",
00:15:25.798 "thread": "nvmf_tgt_poll_group_000",
00:15:25.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:25.798 "listen_address": {
00:15:25.798 "trtype": "TCP",
00:15:25.798 "adrfam": "IPv4",
00:15:25.798 "traddr": "10.0.0.2",
00:15:25.798 "trsvcid": "4420"
00:15:25.798 },
00:15:25.798 "peer_address": {
00:15:25.798 "trtype": "TCP",
00:15:25.798 "adrfam": "IPv4",
00:15:25.798 "traddr": "10.0.0.1",
00:15:25.798 "trsvcid": "38554"
00:15:25.798 },
00:15:25.798 "auth": {
00:15:25.798 "state": "completed",
00:15:25.798 "digest": "sha256",
00:15:25.798 "dhgroup": "null"
00:15:25.798 }
00:15:25.798 }
00:15:25.798 ]'
00:15:25.798 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:25.798 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:25.798 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:25.799 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:25.799 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:25.799 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:25.799 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:25.799 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:26.056 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==:
00:15:26.057 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==:
00:15:26.995 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:26.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:26.995 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:26.995 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.995 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.995 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.995 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:26.995 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:26.995 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:27.251 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:27.508
00:15:27.509 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:27.509 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:27.509 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:27.766 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:27.767 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:27.767 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:27.767 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:27.767 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:27.767 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:27.767 {
00:15:27.767 "cntlid": 5,
00:15:27.767 "qid": 0,
00:15:27.767 "state": "enabled",
00:15:27.767 "thread": "nvmf_tgt_poll_group_000",
00:15:27.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:27.767 "listen_address": {
00:15:27.767 "trtype": "TCP",
00:15:27.767 "adrfam": "IPv4",
00:15:27.767 "traddr": "10.0.0.2",
00:15:27.767 "trsvcid": "4420"
00:15:27.767 },
00:15:27.767 "peer_address": {
00:15:27.767 "trtype": "TCP",
00:15:27.767 "adrfam": "IPv4",
00:15:27.767 "traddr": "10.0.0.1",
00:15:27.767 "trsvcid": "38578"
00:15:27.767 },
00:15:27.767 "auth": {
00:15:27.767 "state": "completed",
00:15:27.767 "digest": "sha256",
00:15:27.767 "dhgroup": "null"
00:15:27.767 }
00:15:27.767 }
00:15:27.767 ]'
00:15:27.767 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:27.767 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:27.767 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:28.024 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:28.024 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:28.024 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:28.024 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:28.024 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:28.282 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO:
00:15:28.282 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO:
00:15:29.217 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:29.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:29.217 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:29.217 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.217 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:29.217 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.217 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:29.217 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:29.217 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:29.475 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:29.733
00:15:29.733 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:29.733 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:29.733 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:29.991 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:29.991 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:29.991 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.991 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:29.991 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.991 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:29.991 {
00:15:29.991 "cntlid": 7,
00:15:29.992 "qid": 0,
00:15:29.992 "state": "enabled",
00:15:29.992 "thread": "nvmf_tgt_poll_group_000",
00:15:29.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:29.992 "listen_address": {
00:15:29.992 "trtype": "TCP",
00:15:29.992 "adrfam": "IPv4",
00:15:29.992 "traddr": "10.0.0.2",
00:15:29.992 "trsvcid": "4420"
00:15:29.992 },
00:15:29.992 "peer_address": {
00:15:29.992 "trtype": "TCP",
00:15:29.992 "adrfam": "IPv4",
00:15:29.992 "traddr": "10.0.0.1",
00:15:29.992 "trsvcid": "38608"
00:15:29.992 },
00:15:29.992 "auth": {
00:15:29.992 "state": "completed",
00:15:29.992 "digest": "sha256",
00:15:29.992 "dhgroup": "null"
00:15:29.992 }
00:15:29.992 }
00:15:29.992 ]'
00:15:29.992 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:29.992 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:29.992 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:30.249 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:30.249 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:30.249 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:30.249 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:30.249 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:30.509 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=:
00:15:30.509 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=:
00:15:31.495 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:31.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:31.495 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:31.495 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.495 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:31.495 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.495 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:31.495 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:31.495 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:31.495 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.772 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.772 22:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.030 00:15:32.030 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.030 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.030 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.287 { 00:15:32.287 "cntlid": 9, 00:15:32.287 "qid": 0, 00:15:32.287 "state": "enabled", 00:15:32.287 "thread": "nvmf_tgt_poll_group_000", 00:15:32.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:32.287 "listen_address": { 00:15:32.287 "trtype": "TCP", 00:15:32.287 "adrfam": "IPv4", 00:15:32.287 "traddr": "10.0.0.2", 00:15:32.287 "trsvcid": "4420" 00:15:32.287 }, 00:15:32.287 "peer_address": { 
00:15:32.287 "trtype": "TCP", 00:15:32.287 "adrfam": "IPv4", 00:15:32.287 "traddr": "10.0.0.1", 00:15:32.287 "trsvcid": "38628" 00:15:32.287 }, 00:15:32.287 "auth": { 00:15:32.287 "state": "completed", 00:15:32.287 "digest": "sha256", 00:15:32.287 "dhgroup": "ffdhe2048" 00:15:32.287 } 00:15:32.287 } 00:15:32.287 ]' 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.287 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.546 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:15:32.546 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:15:33.484 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.743 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.743 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.743 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.743 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.743 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.743 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:33.743 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.001 22:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.001 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.259 00:15:34.259 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.259 22:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.259 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.518 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.518 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.518 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.518 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.518 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.518 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.518 { 00:15:34.518 "cntlid": 11, 00:15:34.518 "qid": 0, 00:15:34.518 "state": "enabled", 00:15:34.518 "thread": "nvmf_tgt_poll_group_000", 00:15:34.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:34.518 "listen_address": { 00:15:34.518 "trtype": "TCP", 00:15:34.518 "adrfam": "IPv4", 00:15:34.518 "traddr": "10.0.0.2", 00:15:34.518 "trsvcid": "4420" 00:15:34.518 }, 00:15:34.518 "peer_address": { 00:15:34.518 "trtype": "TCP", 00:15:34.518 "adrfam": "IPv4", 00:15:34.518 "traddr": "10.0.0.1", 00:15:34.518 "trsvcid": "38658" 00:15:34.518 }, 00:15:34.518 "auth": { 00:15:34.518 "state": "completed", 00:15:34.518 "digest": "sha256", 00:15:34.518 "dhgroup": "ffdhe2048" 00:15:34.518 } 00:15:34.518 } 00:15:34.518 ]' 00:15:34.518 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.776 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:34.776 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.776 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:34.776 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.776 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.776 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.776 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.034 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:15:35.034 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:15:35.971 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.971 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:35.971 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.971 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.971 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.971 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.971 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:35.971 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.229 22:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.229 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.487 00:15:36.487 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.487 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.487 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.745 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.745 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.745 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.745 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.745 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.745 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.745 { 00:15:36.745 "cntlid": 13, 00:15:36.745 "qid": 0, 00:15:36.745 "state": "enabled", 00:15:36.745 "thread": "nvmf_tgt_poll_group_000", 00:15:36.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:36.745 "listen_address": { 00:15:36.745 "trtype": "TCP", 00:15:36.745 "adrfam": "IPv4", 00:15:36.745 "traddr": "10.0.0.2", 00:15:36.745 "trsvcid": "4420" 00:15:36.745 }, 00:15:36.745 "peer_address": { 00:15:36.745 "trtype": "TCP", 00:15:36.745 "adrfam": "IPv4", 00:15:36.745 "traddr": "10.0.0.1", 00:15:36.745 "trsvcid": "41140" 00:15:36.745 }, 00:15:36.745 "auth": { 00:15:36.745 "state": "completed", 00:15:36.745 "digest": "sha256", 00:15:36.745 "dhgroup": "ffdhe2048" 00:15:36.745 } 00:15:36.745 } 00:15:36.745 ]' 00:15:36.745 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.003 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.003 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.003 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.003 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.003 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.003 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:37.003 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.261 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:15:37.261 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:15:38.197 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.197 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.197 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.197 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.197 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.197 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.197 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:38.197 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.455 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.022 00:15:39.022 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.022 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.022 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.022 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.022 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.022 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.022 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.022 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.022 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.022 { 00:15:39.022 "cntlid": 15, 00:15:39.022 "qid": 0, 00:15:39.022 "state": "enabled", 00:15:39.022 "thread": "nvmf_tgt_poll_group_000", 00:15:39.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:39.022 "listen_address": { 00:15:39.022 "trtype": "TCP", 00:15:39.022 "adrfam": "IPv4", 00:15:39.022 "traddr": "10.0.0.2", 00:15:39.022 "trsvcid": 
"4420" 00:15:39.022 }, 00:15:39.022 "peer_address": { 00:15:39.022 "trtype": "TCP", 00:15:39.022 "adrfam": "IPv4", 00:15:39.022 "traddr": "10.0.0.1", 00:15:39.022 "trsvcid": "41154" 00:15:39.022 }, 00:15:39.022 "auth": { 00:15:39.022 "state": "completed", 00:15:39.022 "digest": "sha256", 00:15:39.022 "dhgroup": "ffdhe2048" 00:15:39.022 } 00:15:39.022 } 00:15:39.022 ]' 00:15:39.022 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.280 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.280 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.280 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.280 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.280 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.280 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.280 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.537 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:15:39.537 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:15:40.473 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.473 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.473 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.473 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.473 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.473 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.473 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.473 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:40.473 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.731 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.297 00:15:41.297 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.297 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:41.297 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.555 { 00:15:41.555 "cntlid": 17, 00:15:41.555 "qid": 0, 00:15:41.555 "state": "enabled", 00:15:41.555 "thread": "nvmf_tgt_poll_group_000", 00:15:41.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:41.555 "listen_address": { 00:15:41.555 "trtype": "TCP", 00:15:41.555 "adrfam": "IPv4", 00:15:41.555 "traddr": "10.0.0.2", 00:15:41.555 "trsvcid": "4420" 00:15:41.555 }, 00:15:41.555 "peer_address": { 00:15:41.555 "trtype": "TCP", 00:15:41.555 "adrfam": "IPv4", 00:15:41.555 "traddr": "10.0.0.1", 00:15:41.555 "trsvcid": "41188" 00:15:41.555 }, 00:15:41.555 "auth": { 00:15:41.555 "state": "completed", 00:15:41.555 "digest": "sha256", 00:15:41.555 "dhgroup": "ffdhe3072" 00:15:41.555 } 00:15:41.555 } 00:15:41.555 ]' 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.555 22:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.555 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.814 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:15:41.814 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:15:42.751 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.751 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.751 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.751 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.751 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.751 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.751 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:42.751 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.008 22:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.008 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.573 00:15:43.573 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.573 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.573 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.831 { 00:15:43.831 "cntlid": 19, 00:15:43.831 "qid": 0, 00:15:43.831 "state": "enabled", 00:15:43.831 "thread": "nvmf_tgt_poll_group_000", 00:15:43.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:43.831 "listen_address": { 00:15:43.831 "trtype": "TCP", 00:15:43.831 "adrfam": "IPv4", 00:15:43.831 "traddr": "10.0.0.2", 00:15:43.831 "trsvcid": "4420" 00:15:43.831 }, 00:15:43.831 "peer_address": { 00:15:43.831 "trtype": "TCP", 00:15:43.831 "adrfam": "IPv4", 00:15:43.831 "traddr": "10.0.0.1", 00:15:43.831 "trsvcid": "41216" 00:15:43.831 }, 00:15:43.831 "auth": { 00:15:43.831 "state": "completed", 00:15:43.831 "digest": "sha256", 00:15:43.831 "dhgroup": "ffdhe3072" 00:15:43.831 } 00:15:43.831 } 00:15:43.831 ]' 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:43.831 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.090 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:15:44.090 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:15:45.024 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.024 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.024 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.024 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.024 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.024 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.024 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:45.024 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.283 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.544 00:15:45.544 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.544 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.544 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.804 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.804 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.804 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.804 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.062 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.062 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.062 { 00:15:46.062 "cntlid": 21, 00:15:46.062 "qid": 0, 00:15:46.062 "state": "enabled", 00:15:46.062 "thread": "nvmf_tgt_poll_group_000", 00:15:46.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:46.062 "listen_address": { 
00:15:46.062 "trtype": "TCP", 00:15:46.062 "adrfam": "IPv4", 00:15:46.062 "traddr": "10.0.0.2", 00:15:46.062 "trsvcid": "4420" 00:15:46.062 }, 00:15:46.062 "peer_address": { 00:15:46.062 "trtype": "TCP", 00:15:46.062 "adrfam": "IPv4", 00:15:46.062 "traddr": "10.0.0.1", 00:15:46.062 "trsvcid": "33122" 00:15:46.062 }, 00:15:46.062 "auth": { 00:15:46.062 "state": "completed", 00:15:46.062 "digest": "sha256", 00:15:46.062 "dhgroup": "ffdhe3072" 00:15:46.062 } 00:15:46.062 } 00:15:46.062 ]' 00:15:46.062 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.062 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.062 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.062 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.062 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.062 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.062 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.062 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.320 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:15:46.320 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:15:47.257 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.258 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.258 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.258 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.258 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.258 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.258 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:47.258 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.515 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.773 00:15:47.773 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.773 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:47.773 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.030 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.030 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.030 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.030 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.030 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.030 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.030 { 00:15:48.030 "cntlid": 23, 00:15:48.030 "qid": 0, 00:15:48.030 "state": "enabled", 00:15:48.030 "thread": "nvmf_tgt_poll_group_000", 00:15:48.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:48.030 "listen_address": { 00:15:48.030 "trtype": "TCP", 00:15:48.030 "adrfam": "IPv4", 00:15:48.030 "traddr": "10.0.0.2", 00:15:48.030 "trsvcid": "4420" 00:15:48.030 }, 00:15:48.030 "peer_address": { 00:15:48.030 "trtype": "TCP", 00:15:48.030 "adrfam": "IPv4", 00:15:48.030 "traddr": "10.0.0.1", 00:15:48.030 "trsvcid": "33144" 00:15:48.031 }, 00:15:48.031 "auth": { 00:15:48.031 "state": "completed", 00:15:48.031 "digest": "sha256", 00:15:48.031 "dhgroup": "ffdhe3072" 00:15:48.031 } 00:15:48.031 } 00:15:48.031 ]' 00:15:48.031 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.289 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.289 22:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.289 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.289 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.289 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.289 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.289 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.546 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:15:48.546 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:15:49.485 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.485 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.485 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:49.485 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.485 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.485 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.485 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.485 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:49.485 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.743 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.309 00:15:50.309 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.309 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.309 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.567 22:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.567 { 00:15:50.567 "cntlid": 25, 00:15:50.567 "qid": 0, 00:15:50.567 "state": "enabled", 00:15:50.567 "thread": "nvmf_tgt_poll_group_000", 00:15:50.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:50.567 "listen_address": { 00:15:50.567 "trtype": "TCP", 00:15:50.567 "adrfam": "IPv4", 00:15:50.567 "traddr": "10.0.0.2", 00:15:50.567 "trsvcid": "4420" 00:15:50.567 }, 00:15:50.567 "peer_address": { 00:15:50.567 "trtype": "TCP", 00:15:50.567 "adrfam": "IPv4", 00:15:50.567 "traddr": "10.0.0.1", 00:15:50.567 "trsvcid": "33166" 00:15:50.567 }, 00:15:50.567 "auth": { 00:15:50.567 "state": "completed", 00:15:50.567 "digest": "sha256", 00:15:50.567 "dhgroup": "ffdhe4096" 00:15:50.567 } 00:15:50.567 } 00:15:50.567 ]' 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.567 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.567 22:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.827 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:15:50.827 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:15:51.762 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.762 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.762 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.762 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.762 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.762 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.762 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:51.762 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.020 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.586 00:15:52.586 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.586 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.586 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.844 { 00:15:52.844 "cntlid": 27, 00:15:52.844 "qid": 0, 00:15:52.844 "state": "enabled", 00:15:52.844 "thread": "nvmf_tgt_poll_group_000", 00:15:52.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:52.844 
"listen_address": { 00:15:52.844 "trtype": "TCP", 00:15:52.844 "adrfam": "IPv4", 00:15:52.844 "traddr": "10.0.0.2", 00:15:52.844 "trsvcid": "4420" 00:15:52.844 }, 00:15:52.844 "peer_address": { 00:15:52.844 "trtype": "TCP", 00:15:52.844 "adrfam": "IPv4", 00:15:52.844 "traddr": "10.0.0.1", 00:15:52.844 "trsvcid": "33190" 00:15:52.844 }, 00:15:52.844 "auth": { 00:15:52.844 "state": "completed", 00:15:52.844 "digest": "sha256", 00:15:52.844 "dhgroup": "ffdhe4096" 00:15:52.844 } 00:15:52.844 } 00:15:52.844 ]' 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.844 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.103 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:15:53.103 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:15:54.039 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.039 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.039 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.039 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.040 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.040 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.040 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:54.040 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:54.297 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:54.297 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.297 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:54.297 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:54.297 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:54.297 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.297 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.297 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.298 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.298 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.298 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.298 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.298 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.866 00:15:54.866 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:54.866 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.866 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.866 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.170 { 00:15:55.170 "cntlid": 29, 00:15:55.170 "qid": 0, 00:15:55.170 "state": "enabled", 00:15:55.170 "thread": "nvmf_tgt_poll_group_000", 00:15:55.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:55.170 "listen_address": { 00:15:55.170 "trtype": "TCP", 00:15:55.170 "adrfam": "IPv4", 00:15:55.170 "traddr": "10.0.0.2", 00:15:55.170 "trsvcid": "4420" 00:15:55.170 }, 00:15:55.170 "peer_address": { 00:15:55.170 "trtype": "TCP", 00:15:55.170 "adrfam": "IPv4", 00:15:55.170 "traddr": "10.0.0.1", 00:15:55.170 "trsvcid": "60990" 00:15:55.170 }, 00:15:55.170 "auth": { 00:15:55.170 "state": "completed", 00:15:55.170 "digest": "sha256", 00:15:55.170 "dhgroup": "ffdhe4096" 00:15:55.170 } 00:15:55.170 } 00:15:55.170 ]' 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.170 22:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.170 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.429 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:15:55.429 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:15:56.366 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.366 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.366 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.366 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.366 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.366 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.366 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.366 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.624 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:56.624 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.624 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.624 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:56.624 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:56.624 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:56.625 22:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:56.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.192 00:15:57.192 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.192 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.192 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.451 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.451 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.451 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.451 22:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.451 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.451 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.451 { 00:15:57.451 "cntlid": 31, 00:15:57.451 "qid": 0, 00:15:57.451 "state": "enabled", 00:15:57.451 "thread": "nvmf_tgt_poll_group_000", 00:15:57.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:57.451 "listen_address": { 00:15:57.451 "trtype": "TCP", 00:15:57.451 "adrfam": "IPv4", 00:15:57.451 "traddr": "10.0.0.2", 00:15:57.451 "trsvcid": "4420" 00:15:57.451 }, 00:15:57.451 "peer_address": { 00:15:57.451 "trtype": "TCP", 00:15:57.451 "adrfam": "IPv4", 00:15:57.451 "traddr": "10.0.0.1", 00:15:57.451 "trsvcid": "32770" 00:15:57.451 }, 00:15:57.451 "auth": { 00:15:57.451 "state": "completed", 00:15:57.451 "digest": "sha256", 00:15:57.451 "dhgroup": "ffdhe4096" 00:15:57.451 } 00:15:57.451 } 00:15:57.451 ]' 00:15:57.451 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.451 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.451 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.451 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.451 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.451 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.451 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.451 22:47:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.709 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:15:57.709 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:15:58.644 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.644 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.644 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.644 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.644 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.644 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.644 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.644 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:15:58.644 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:58.902 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.903 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.468 00:15:59.728 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.728 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.728 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.987 { 00:15:59.987 "cntlid": 33, 00:15:59.987 "qid": 0, 00:15:59.987 "state": "enabled", 00:15:59.987 "thread": "nvmf_tgt_poll_group_000", 00:15:59.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:59.987 "listen_address": { 
00:15:59.987 "trtype": "TCP", 00:15:59.987 "adrfam": "IPv4", 00:15:59.987 "traddr": "10.0.0.2", 00:15:59.987 "trsvcid": "4420" 00:15:59.987 }, 00:15:59.987 "peer_address": { 00:15:59.987 "trtype": "TCP", 00:15:59.987 "adrfam": "IPv4", 00:15:59.987 "traddr": "10.0.0.1", 00:15:59.987 "trsvcid": "32804" 00:15:59.987 }, 00:15:59.987 "auth": { 00:15:59.987 "state": "completed", 00:15:59.987 "digest": "sha256", 00:15:59.987 "dhgroup": "ffdhe6144" 00:15:59.987 } 00:15:59.987 } 00:15:59.987 ]' 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.987 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.245 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:00.245 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:01.179 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.179 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.179 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.179 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.179 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.179 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.179 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:01.179 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.437 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.005 00:16:02.005 22:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.005 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.005 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.263 { 00:16:02.263 "cntlid": 35, 00:16:02.263 "qid": 0, 00:16:02.263 "state": "enabled", 00:16:02.263 "thread": "nvmf_tgt_poll_group_000", 00:16:02.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:02.263 "listen_address": { 00:16:02.263 "trtype": "TCP", 00:16:02.263 "adrfam": "IPv4", 00:16:02.263 "traddr": "10.0.0.2", 00:16:02.263 "trsvcid": "4420" 00:16:02.263 }, 00:16:02.263 "peer_address": { 00:16:02.263 "trtype": "TCP", 00:16:02.263 "adrfam": "IPv4", 00:16:02.263 "traddr": "10.0.0.1", 00:16:02.263 "trsvcid": "32838" 00:16:02.263 }, 00:16:02.263 "auth": { 00:16:02.263 "state": "completed", 00:16:02.263 "digest": "sha256", 00:16:02.263 "dhgroup": "ffdhe6144" 00:16:02.263 } 00:16:02.263 } 00:16:02.263 ]' 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.263 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.522 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:02.522 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:03.458 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.458 22:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.458 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.458 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.458 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.458 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.458 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.458 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.714 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.280 00:16:04.280 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.280 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.280 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.538 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.538 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.538 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.538 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.538 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.538 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.538 { 00:16:04.538 "cntlid": 37, 00:16:04.538 "qid": 0, 00:16:04.538 "state": "enabled", 00:16:04.538 "thread": "nvmf_tgt_poll_group_000", 00:16:04.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:04.538 "listen_address": { 00:16:04.538 "trtype": "TCP", 00:16:04.538 "adrfam": "IPv4", 00:16:04.538 "traddr": "10.0.0.2", 00:16:04.538 "trsvcid": "4420" 00:16:04.538 }, 00:16:04.538 "peer_address": { 00:16:04.538 "trtype": "TCP", 00:16:04.538 "adrfam": "IPv4", 00:16:04.538 "traddr": "10.0.0.1", 00:16:04.538 "trsvcid": "32872" 00:16:04.538 }, 00:16:04.538 "auth": { 00:16:04.538 "state": "completed", 00:16:04.538 "digest": "sha256", 00:16:04.538 "dhgroup": "ffdhe6144" 00:16:04.538 } 00:16:04.538 } 00:16:04.538 ]' 00:16:04.538 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.798 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.798 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.798 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.798 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.798 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:04.798 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.798 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.056 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:05.057 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:05.993 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.993 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.993 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.993 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.993 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.993 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:05.993 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.993 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.251 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.819 00:16:06.819 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.819 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.819 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.077 { 00:16:07.077 "cntlid": 39, 00:16:07.077 "qid": 0, 00:16:07.077 "state": "enabled", 00:16:07.077 "thread": "nvmf_tgt_poll_group_000", 00:16:07.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:07.077 "listen_address": { 00:16:07.077 "trtype": 
"TCP", 00:16:07.077 "adrfam": "IPv4", 00:16:07.077 "traddr": "10.0.0.2", 00:16:07.077 "trsvcid": "4420" 00:16:07.077 }, 00:16:07.077 "peer_address": { 00:16:07.077 "trtype": "TCP", 00:16:07.077 "adrfam": "IPv4", 00:16:07.077 "traddr": "10.0.0.1", 00:16:07.077 "trsvcid": "41000" 00:16:07.077 }, 00:16:07.077 "auth": { 00:16:07.077 "state": "completed", 00:16:07.077 "digest": "sha256", 00:16:07.077 "dhgroup": "ffdhe6144" 00:16:07.077 } 00:16:07.077 } 00:16:07.077 ]' 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.077 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.337 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:07.337 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:08.274 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.274 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.274 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.274 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.274 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.274 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.274 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.274 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:08.274 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:08.533 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:08.533 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.533 22:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.534 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:08.534 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.534 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.534 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.534 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.534 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.792 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.792 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.792 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.792 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.361 00:16:09.620 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.620 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.620 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.879 { 00:16:09.879 "cntlid": 41, 00:16:09.879 "qid": 0, 00:16:09.879 "state": "enabled", 00:16:09.879 "thread": "nvmf_tgt_poll_group_000", 00:16:09.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:09.879 "listen_address": { 00:16:09.879 "trtype": "TCP", 00:16:09.879 "adrfam": "IPv4", 00:16:09.879 "traddr": "10.0.0.2", 00:16:09.879 "trsvcid": "4420" 00:16:09.879 }, 00:16:09.879 "peer_address": { 00:16:09.879 "trtype": "TCP", 00:16:09.879 "adrfam": "IPv4", 00:16:09.879 "traddr": "10.0.0.1", 00:16:09.879 "trsvcid": "41008" 00:16:09.879 }, 00:16:09.879 "auth": { 00:16:09.879 "state": "completed", 00:16:09.879 "digest": "sha256", 00:16:09.879 "dhgroup": "ffdhe8192" 00:16:09.879 } 00:16:09.879 } 00:16:09.879 ]' 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.879 22:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.879 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.138 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:10.138 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:11.076 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:11.076 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.076 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.076 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.076 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.076 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.076 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.076 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.642 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:11.642 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.642 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:11.642 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:11.642 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.642 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.642 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.642 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.643 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.643 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.643 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.643 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.643 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.582 00:16:12.582 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.582 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.582 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.582 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.582 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.582 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.582 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.582 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.582 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.582 { 00:16:12.582 "cntlid": 43, 00:16:12.582 "qid": 0, 00:16:12.582 "state": "enabled", 00:16:12.582 "thread": "nvmf_tgt_poll_group_000", 00:16:12.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:12.582 "listen_address": { 00:16:12.582 "trtype": "TCP", 00:16:12.582 "adrfam": "IPv4", 00:16:12.582 "traddr": "10.0.0.2", 00:16:12.582 "trsvcid": "4420" 00:16:12.582 }, 00:16:12.582 "peer_address": { 00:16:12.582 "trtype": "TCP", 00:16:12.582 "adrfam": "IPv4", 00:16:12.582 "traddr": "10.0.0.1", 00:16:12.582 "trsvcid": "41030" 00:16:12.582 }, 00:16:12.582 "auth": { 00:16:12.582 "state": "completed", 00:16:12.582 "digest": "sha256", 00:16:12.582 "dhgroup": "ffdhe8192" 00:16:12.582 } 00:16:12.582 } 00:16:12.582 ]' 00:16:12.582 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.582 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.582 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.841 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.841 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.841 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:12.841 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.841 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.099 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:13.099 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:14.039 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.039 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.039 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.039 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.039 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.039 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:14.039 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:14.039 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.297 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.236 00:16:15.236 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.236 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.236 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.495 { 00:16:15.495 "cntlid": 45, 00:16:15.495 "qid": 0, 00:16:15.495 "state": "enabled", 00:16:15.495 "thread": "nvmf_tgt_poll_group_000", 00:16:15.495 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:15.495 "listen_address": { 00:16:15.495 "trtype": "TCP", 00:16:15.495 "adrfam": "IPv4", 00:16:15.495 "traddr": "10.0.0.2", 00:16:15.495 "trsvcid": "4420" 00:16:15.495 }, 00:16:15.495 "peer_address": { 00:16:15.495 "trtype": "TCP", 00:16:15.495 "adrfam": "IPv4", 00:16:15.495 "traddr": "10.0.0.1", 00:16:15.495 "trsvcid": "47962" 00:16:15.495 }, 00:16:15.495 "auth": { 00:16:15.495 "state": "completed", 00:16:15.495 "digest": "sha256", 00:16:15.495 "dhgroup": "ffdhe8192" 00:16:15.495 } 00:16:15.495 } 00:16:15.495 ]' 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.495 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.753 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:15.753 22:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:16.689 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.689 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.689 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.689 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.689 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.689 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.689 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.689 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.948 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.885 00:16:17.885 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
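In the key3 pass above, `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expands to nothing because no controller key is configured for that slot, so `nvmf_subsystem_add_host` is called with `--dhchap-key key3` only. A small bash sketch of that `:+` expansion, with a made-up `ckeys` array standing in for the test's key table:

```shell
#!/usr/bin/env bash
# Hypothetical key table: slot 1 has a controller secret, slot 3 does not,
# mirroring the key3 case in the log where ckey expands to nothing.
ckeys=([1]="DHHC-1:01:example==" [3]="")

build_ckey_args() {
    local keyid=$1
    # ${var:+word} yields "word" only when var is set and non-empty,
    # so the two flags are emitted only for keys that have a ctrlr secret.
    local args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#args[@]}"
}

echo "$(build_ckey_args 1)"   # 2 -> both flags would be passed
echo "$(build_ckey_args 3)"   # 0 -> flags are omitted entirely
```

This is why unidirectional keys (no `ckeyN`) and bidirectional keys can share one code path in the test loop.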
bdev_nvme_get_controllers 00:16:17.885 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.885 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.143 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.143 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.144 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.144 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.144 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.144 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.144 { 00:16:18.144 "cntlid": 47, 00:16:18.144 "qid": 0, 00:16:18.144 "state": "enabled", 00:16:18.144 "thread": "nvmf_tgt_poll_group_000", 00:16:18.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:18.144 "listen_address": { 00:16:18.144 "trtype": "TCP", 00:16:18.144 "adrfam": "IPv4", 00:16:18.144 "traddr": "10.0.0.2", 00:16:18.144 "trsvcid": "4420" 00:16:18.144 }, 00:16:18.144 "peer_address": { 00:16:18.144 "trtype": "TCP", 00:16:18.144 "adrfam": "IPv4", 00:16:18.144 "traddr": "10.0.0.1", 00:16:18.144 "trsvcid": "47982" 00:16:18.144 }, 00:16:18.144 "auth": { 00:16:18.144 "state": "completed", 00:16:18.144 "digest": "sha256", 00:16:18.144 "dhgroup": "ffdhe8192" 00:16:18.144 } 00:16:18.144 } 00:16:18.144 ]' 00:16:18.144 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.144 22:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.144 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.144 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.144 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.402 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.402 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.402 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.661 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:18.661 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:19.597 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.597 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.597 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.597 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.597 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.597 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:19.597 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.597 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.597 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:19.597 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.855 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.115 00:16:20.115 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.115 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.115 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.404 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.404 22:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.404 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.404 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.404 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.404 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.404 { 00:16:20.404 "cntlid": 49, 00:16:20.404 "qid": 0, 00:16:20.404 "state": "enabled", 00:16:20.404 "thread": "nvmf_tgt_poll_group_000", 00:16:20.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:20.404 "listen_address": { 00:16:20.404 "trtype": "TCP", 00:16:20.404 "adrfam": "IPv4", 00:16:20.404 "traddr": "10.0.0.2", 00:16:20.404 "trsvcid": "4420" 00:16:20.404 }, 00:16:20.404 "peer_address": { 00:16:20.404 "trtype": "TCP", 00:16:20.404 "adrfam": "IPv4", 00:16:20.404 "traddr": "10.0.0.1", 00:16:20.404 "trsvcid": "48010" 00:16:20.404 }, 00:16:20.404 "auth": { 00:16:20.404 "state": "completed", 00:16:20.404 "digest": "sha384", 00:16:20.404 "dhgroup": "null" 00:16:20.404 } 00:16:20.404 } 00:16:20.404 ]' 00:16:20.404 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.404 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.404 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.404 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:20.404 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.687 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.687 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.687 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.687 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:20.687 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:21.624 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.624 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.624 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.624 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.624 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
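The secrets passed to `nvme connect` above follow the `DHHC-1:<hh>:<base64>:` representation (note the trailing colon), where the second field is a two-digit hash identifier, `00` denoting an untransformed key. A bash sketch that splits a secret into its fields and recovers the raw key length; the sample secret is a hypothetical all-zero 32-byte key, not one of the secrets used in this run:

```shell
#!/usr/bin/env bash
# Build a hypothetical DHHC-1 secret: hash id 00, 32 zero bytes base64-encoded.
secret="DHHC-1:00:$(head -c 32 /dev/zero | base64):"

# Split on ':' -- the trailing colon yields an empty final field.
IFS=: read -r prefix hash_id b64 _ <<<"$secret"

# Decode the payload to check the raw key length in bytes.
keylen=$(printf '%s' "$b64" | base64 -d | wc -c)

echo "$prefix $hash_id $keylen"   # DHHC-1 00 32
```

The test run mixes 32-, 48-, and 64-byte keys with hash ids 00 through 03, which is why its base64 payloads vary in length.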
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.624 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.624 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:21.624 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.882 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.451 00:16:22.451 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.451 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.451 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.451 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.451 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.451 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.451 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.709 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.709 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.709 { 00:16:22.709 "cntlid": 51, 
00:16:22.709 "qid": 0, 00:16:22.709 "state": "enabled", 00:16:22.709 "thread": "nvmf_tgt_poll_group_000", 00:16:22.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:22.709 "listen_address": { 00:16:22.709 "trtype": "TCP", 00:16:22.709 "adrfam": "IPv4", 00:16:22.709 "traddr": "10.0.0.2", 00:16:22.709 "trsvcid": "4420" 00:16:22.709 }, 00:16:22.709 "peer_address": { 00:16:22.709 "trtype": "TCP", 00:16:22.709 "adrfam": "IPv4", 00:16:22.709 "traddr": "10.0.0.1", 00:16:22.709 "trsvcid": "48040" 00:16:22.709 }, 00:16:22.709 "auth": { 00:16:22.709 "state": "completed", 00:16:22.709 "digest": "sha384", 00:16:22.709 "dhgroup": "null" 00:16:22.709 } 00:16:22.709 } 00:16:22.709 ]' 00:16:22.709 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.709 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.709 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.709 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.709 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.709 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.709 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.709 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.967 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret 
DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:22.967 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:23.907 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.907 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.907 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.907 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.907 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.907 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.907 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:23.907 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:24.165 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:16:24.165 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.165 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.165 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.165 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.165 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.165 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.165 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.165 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.165 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.165 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.166 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.166 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.424 00:16:24.424 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.424 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.424 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.683 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.683 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.683 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.683 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.683 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.683 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.683 { 00:16:24.683 "cntlid": 53, 00:16:24.683 "qid": 0, 00:16:24.683 "state": "enabled", 00:16:24.683 "thread": "nvmf_tgt_poll_group_000", 00:16:24.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:24.683 "listen_address": { 00:16:24.683 "trtype": "TCP", 00:16:24.683 "adrfam": "IPv4", 00:16:24.683 "traddr": "10.0.0.2", 00:16:24.683 "trsvcid": "4420" 00:16:24.683 }, 00:16:24.683 "peer_address": { 00:16:24.683 "trtype": "TCP", 00:16:24.683 "adrfam": "IPv4", 00:16:24.684 "traddr": "10.0.0.1", 00:16:24.684 "trsvcid": "48070" 00:16:24.684 }, 00:16:24.684 "auth": { 00:16:24.684 "state": "completed", 00:16:24.684 "digest": "sha384", 00:16:24.684 "dhgroup": "null" 00:16:24.684 } 00:16:24.684 } 
00:16:24.684 ]' 00:16:24.684 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.942 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.942 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.942 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.942 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.942 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.942 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.942 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.199 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:25.199 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:26.137 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.137 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.137 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.137 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.137 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.137 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.137 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.137 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:26.137 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.965 00:16:26.965 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.965 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.965 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.224 { 00:16:27.224 "cntlid": 55, 00:16:27.224 "qid": 0, 00:16:27.224 "state": "enabled", 00:16:27.224 "thread": "nvmf_tgt_poll_group_000", 00:16:27.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:27.224 "listen_address": { 00:16:27.224 "trtype": "TCP", 00:16:27.224 "adrfam": "IPv4", 00:16:27.224 "traddr": "10.0.0.2", 00:16:27.224 "trsvcid": "4420" 00:16:27.224 }, 00:16:27.224 "peer_address": { 00:16:27.224 "trtype": "TCP", 00:16:27.224 "adrfam": "IPv4", 00:16:27.224 "traddr": "10.0.0.1", 00:16:27.224 "trsvcid": "45968" 00:16:27.224 }, 00:16:27.224 "auth": { 00:16:27.224 "state": "completed", 00:16:27.224 "digest": "sha384", 00:16:27.224 "dhgroup": "null" 00:16:27.224 } 00:16:27.224 } 00:16:27.224 ]' 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.224 22:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.224 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.483 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:27.483 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:28.414 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.414 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.414 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.414 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.414 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.415 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.415 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.415 22:47:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:28.415 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:28.674 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:28.674 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.674 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.674 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.674 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.674 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.674 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.674 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.674 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.933 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.933 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.933 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.933 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.190 00:16:29.190 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.190 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.190 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.449 { 00:16:29.449 "cntlid": 57, 00:16:29.449 "qid": 0, 00:16:29.449 "state": "enabled", 00:16:29.449 "thread": "nvmf_tgt_poll_group_000", 00:16:29.449 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:29.449 "listen_address": { 00:16:29.449 "trtype": "TCP", 00:16:29.449 "adrfam": "IPv4", 00:16:29.449 "traddr": "10.0.0.2", 00:16:29.449 "trsvcid": "4420" 00:16:29.449 }, 00:16:29.449 "peer_address": { 00:16:29.449 "trtype": "TCP", 00:16:29.449 "adrfam": "IPv4", 00:16:29.449 "traddr": "10.0.0.1", 00:16:29.449 "trsvcid": "46006" 00:16:29.449 }, 00:16:29.449 "auth": { 00:16:29.449 "state": "completed", 00:16:29.449 "digest": "sha384", 00:16:29.449 "dhgroup": "ffdhe2048" 00:16:29.449 } 00:16:29.449 } 00:16:29.449 ]' 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.449 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.016 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret 
DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:30.016 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:30.953 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.953 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.953 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.953 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.953 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.953 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.953 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:30.953 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:31.213 22:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.213 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.471 00:16:31.471 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.471 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.471 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.729 { 00:16:31.729 "cntlid": 59, 00:16:31.729 "qid": 0, 00:16:31.729 "state": "enabled", 00:16:31.729 "thread": "nvmf_tgt_poll_group_000", 00:16:31.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:31.729 "listen_address": { 00:16:31.729 "trtype": "TCP", 00:16:31.729 "adrfam": "IPv4", 00:16:31.729 "traddr": "10.0.0.2", 00:16:31.729 "trsvcid": "4420" 00:16:31.729 }, 00:16:31.729 "peer_address": { 00:16:31.729 "trtype": "TCP", 00:16:31.729 "adrfam": "IPv4", 00:16:31.729 "traddr": "10.0.0.1", 00:16:31.729 "trsvcid": "46020" 00:16:31.729 }, 00:16:31.729 "auth": { 00:16:31.729 "state": 
"completed", 00:16:31.729 "digest": "sha384", 00:16:31.729 "dhgroup": "ffdhe2048" 00:16:31.729 } 00:16:31.729 } 00:16:31.729 ]' 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.729 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.296 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:32.296 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:33.231 22:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.231 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.797 00:16:33.797 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.797 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.797 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.056 
22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.056 { 00:16:34.056 "cntlid": 61, 00:16:34.056 "qid": 0, 00:16:34.056 "state": "enabled", 00:16:34.056 "thread": "nvmf_tgt_poll_group_000", 00:16:34.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:34.056 "listen_address": { 00:16:34.056 "trtype": "TCP", 00:16:34.056 "adrfam": "IPv4", 00:16:34.056 "traddr": "10.0.0.2", 00:16:34.056 "trsvcid": "4420" 00:16:34.056 }, 00:16:34.056 "peer_address": { 00:16:34.056 "trtype": "TCP", 00:16:34.056 "adrfam": "IPv4", 00:16:34.056 "traddr": "10.0.0.1", 00:16:34.056 "trsvcid": "46044" 00:16:34.056 }, 00:16:34.056 "auth": { 00:16:34.056 "state": "completed", 00:16:34.056 "digest": "sha384", 00:16:34.056 "dhgroup": "ffdhe2048" 00:16:34.056 } 00:16:34.056 } 00:16:34.056 ]' 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.056 22:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.056 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.315 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:34.315 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:35.251 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.251 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.251 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.251 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.251 
22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.251 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.251 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.251 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.509 22:47:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.509 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.768 00:16:35.768 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.768 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.768 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.338 { 00:16:36.338 "cntlid": 63, 00:16:36.338 
"qid": 0, 00:16:36.338 "state": "enabled", 00:16:36.338 "thread": "nvmf_tgt_poll_group_000", 00:16:36.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:36.338 "listen_address": { 00:16:36.338 "trtype": "TCP", 00:16:36.338 "adrfam": "IPv4", 00:16:36.338 "traddr": "10.0.0.2", 00:16:36.338 "trsvcid": "4420" 00:16:36.338 }, 00:16:36.338 "peer_address": { 00:16:36.338 "trtype": "TCP", 00:16:36.338 "adrfam": "IPv4", 00:16:36.338 "traddr": "10.0.0.1", 00:16:36.338 "trsvcid": "36434" 00:16:36.338 }, 00:16:36.338 "auth": { 00:16:36.338 "state": "completed", 00:16:36.338 "digest": "sha384", 00:16:36.338 "dhgroup": "ffdhe2048" 00:16:36.338 } 00:16:36.338 } 00:16:36.338 ]' 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.338 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.596 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:36.596 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:37.529 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.529 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.529 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.529 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.529 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.529 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.529 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.529 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:37.529 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:37.787 22:47:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.787 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.045 00:16:38.045 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.045 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.045 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.302 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.302 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.302 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.302 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.302 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.302 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.302 { 00:16:38.302 "cntlid": 65, 00:16:38.302 "qid": 0, 00:16:38.302 "state": "enabled", 00:16:38.302 "thread": "nvmf_tgt_poll_group_000", 00:16:38.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:38.302 "listen_address": { 00:16:38.302 "trtype": "TCP", 00:16:38.302 "adrfam": "IPv4", 00:16:38.302 "traddr": "10.0.0.2", 00:16:38.302 "trsvcid": "4420" 00:16:38.302 }, 00:16:38.302 "peer_address": { 00:16:38.302 "trtype": "TCP", 00:16:38.302 "adrfam": "IPv4", 00:16:38.302 "traddr": "10.0.0.1", 00:16:38.302 "trsvcid": "36448" 00:16:38.302 }, 00:16:38.302 "auth": { 00:16:38.302 "state": 
"completed", 00:16:38.302 "digest": "sha384", 00:16:38.302 "dhgroup": "ffdhe3072" 00:16:38.302 } 00:16:38.302 } 00:16:38.302 ]' 00:16:38.303 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.303 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.303 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.560 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.560 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.560 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.560 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.560 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.818 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:38.818 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret 
DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:39.751 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.751 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.751 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.751 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.751 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.751 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.751 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:39.751 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.009 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.267 00:16:40.267 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.267 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.267 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.526 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.526 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.526 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.526 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.526 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.526 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.526 { 00:16:40.526 "cntlid": 67, 00:16:40.526 "qid": 0, 00:16:40.526 "state": "enabled", 00:16:40.526 "thread": "nvmf_tgt_poll_group_000", 00:16:40.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:40.526 "listen_address": { 00:16:40.526 "trtype": "TCP", 00:16:40.526 "adrfam": "IPv4", 00:16:40.526 "traddr": "10.0.0.2", 00:16:40.526 "trsvcid": "4420" 00:16:40.526 }, 00:16:40.526 "peer_address": { 00:16:40.526 "trtype": "TCP", 00:16:40.526 "adrfam": "IPv4", 00:16:40.526 "traddr": "10.0.0.1", 00:16:40.526 "trsvcid": "36474" 00:16:40.526 }, 00:16:40.526 "auth": { 00:16:40.526 "state": "completed", 00:16:40.526 "digest": "sha384", 00:16:40.526 "dhgroup": "ffdhe3072" 00:16:40.526 } 00:16:40.526 } 00:16:40.526 ]' 00:16:40.526 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.526 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.526 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.526 22:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.526 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.783 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.783 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.783 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.041 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:41.041 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:41.974 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.974 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.974 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:41.974 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.974 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.974 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.974 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:41.974 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.232 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.490 00:16:42.491 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.491 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.491 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.749 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.749 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.749 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.749 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.749 22:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.749 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.749 { 00:16:42.749 "cntlid": 69, 00:16:42.749 "qid": 0, 00:16:42.749 "state": "enabled", 00:16:42.749 "thread": "nvmf_tgt_poll_group_000", 00:16:42.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:42.749 "listen_address": { 00:16:42.749 "trtype": "TCP", 00:16:42.749 "adrfam": "IPv4", 00:16:42.749 "traddr": "10.0.0.2", 00:16:42.749 "trsvcid": "4420" 00:16:42.749 }, 00:16:42.749 "peer_address": { 00:16:42.749 "trtype": "TCP", 00:16:42.749 "adrfam": "IPv4", 00:16:42.749 "traddr": "10.0.0.1", 00:16:42.749 "trsvcid": "36502" 00:16:42.749 }, 00:16:42.749 "auth": { 00:16:42.749 "state": "completed", 00:16:42.749 "digest": "sha384", 00:16:42.749 "dhgroup": "ffdhe3072" 00:16:42.749 } 00:16:42.749 } 00:16:42.749 ]' 00:16:42.749 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.749 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.749 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.006 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.006 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.006 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.006 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.006 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.264 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:43.264 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:44.197 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.197 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.197 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.197 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.197 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.197 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.197 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:44.197 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.454 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.711 00:16:44.711 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.711 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.711 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.306 { 00:16:45.306 "cntlid": 71, 00:16:45.306 "qid": 0, 00:16:45.306 "state": "enabled", 00:16:45.306 "thread": "nvmf_tgt_poll_group_000", 00:16:45.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:45.306 "listen_address": { 00:16:45.306 "trtype": "TCP", 00:16:45.306 "adrfam": "IPv4", 00:16:45.306 "traddr": "10.0.0.2", 00:16:45.306 "trsvcid": "4420" 00:16:45.306 }, 00:16:45.306 "peer_address": { 00:16:45.306 "trtype": "TCP", 00:16:45.306 "adrfam": "IPv4", 00:16:45.306 "traddr": "10.0.0.1", 
00:16:45.306 "trsvcid": "44234" 00:16:45.306 }, 00:16:45.306 "auth": { 00:16:45.306 "state": "completed", 00:16:45.306 "digest": "sha384", 00:16:45.306 "dhgroup": "ffdhe3072" 00:16:45.306 } 00:16:45.306 } 00:16:45.306 ]' 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.306 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.563 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:45.564 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:46.496 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.496 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.496 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.496 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.496 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.496 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.496 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.496 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.496 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.755 22:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.755 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.013 00:16:47.270 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.270 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.270 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.528 { 00:16:47.528 "cntlid": 73, 00:16:47.528 "qid": 0, 00:16:47.528 "state": "enabled", 00:16:47.528 "thread": "nvmf_tgt_poll_group_000", 00:16:47.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:47.528 "listen_address": { 00:16:47.528 "trtype": "TCP", 00:16:47.528 "adrfam": "IPv4", 00:16:47.528 "traddr": "10.0.0.2", 00:16:47.528 "trsvcid": "4420" 00:16:47.528 }, 00:16:47.528 "peer_address": { 00:16:47.528 "trtype": "TCP", 00:16:47.528 "adrfam": "IPv4", 00:16:47.528 "traddr": "10.0.0.1", 00:16:47.528 "trsvcid": "44260" 00:16:47.528 }, 00:16:47.528 "auth": { 00:16:47.528 "state": "completed", 00:16:47.528 "digest": "sha384", 00:16:47.528 "dhgroup": "ffdhe4096" 00:16:47.528 } 00:16:47.528 } 00:16:47.528 ]' 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.528 22:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.528 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.785 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:47.785 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:48.719 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.719 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.719 22:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.719 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.719 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.719 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.719 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.719 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.977 22:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.977 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.542 00:16:49.542 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.542 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.542 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.799 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.799 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.799 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.799 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.799 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.800 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.800 { 00:16:49.800 "cntlid": 75, 00:16:49.800 "qid": 0, 00:16:49.800 "state": "enabled", 00:16:49.800 "thread": "nvmf_tgt_poll_group_000", 00:16:49.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:49.800 "listen_address": { 00:16:49.800 "trtype": "TCP", 00:16:49.800 "adrfam": "IPv4", 00:16:49.800 "traddr": "10.0.0.2", 00:16:49.800 "trsvcid": "4420" 00:16:49.800 }, 00:16:49.800 "peer_address": { 00:16:49.800 "trtype": "TCP", 00:16:49.800 "adrfam": "IPv4", 00:16:49.800 "traddr": "10.0.0.1", 00:16:49.800 "trsvcid": "44280" 00:16:49.800 }, 00:16:49.800 "auth": { 00:16:49.800 "state": "completed", 00:16:49.800 "digest": "sha384", 00:16:49.800 "dhgroup": "ffdhe4096" 00:16:49.800 } 00:16:49.800 } 00:16:49.800 ]' 00:16:49.800 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.800 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.800 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.800 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.800 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.800 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.800 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.800 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.364 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:50.365 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.297 22:47:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.297 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.862 00:16:51.862 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.862 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.862 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.119 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.119 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.119 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.119 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.120 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.120 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.120 { 00:16:52.120 "cntlid": 77, 00:16:52.120 "qid": 0, 00:16:52.120 "state": "enabled", 00:16:52.120 "thread": "nvmf_tgt_poll_group_000", 00:16:52.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:52.120 "listen_address": { 00:16:52.120 "trtype": "TCP", 00:16:52.120 "adrfam": "IPv4", 00:16:52.120 "traddr": "10.0.0.2", 00:16:52.120 
"trsvcid": "4420" 00:16:52.120 }, 00:16:52.120 "peer_address": { 00:16:52.120 "trtype": "TCP", 00:16:52.120 "adrfam": "IPv4", 00:16:52.120 "traddr": "10.0.0.1", 00:16:52.120 "trsvcid": "44294" 00:16:52.120 }, 00:16:52.120 "auth": { 00:16:52.120 "state": "completed", 00:16:52.120 "digest": "sha384", 00:16:52.120 "dhgroup": "ffdhe4096" 00:16:52.120 } 00:16:52.120 } 00:16:52.120 ]' 00:16:52.120 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.120 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.120 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.120 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.120 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.120 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.120 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.120 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.685 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:52.685 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.619 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.183 00:16:54.183 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.183 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.183 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.441 { 00:16:54.441 "cntlid": 79, 00:16:54.441 "qid": 0, 00:16:54.441 "state": "enabled", 00:16:54.441 "thread": "nvmf_tgt_poll_group_000", 00:16:54.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:54.441 "listen_address": { 00:16:54.441 "trtype": "TCP", 00:16:54.441 "adrfam": "IPv4", 00:16:54.441 "traddr": "10.0.0.2", 00:16:54.441 "trsvcid": "4420" 00:16:54.441 }, 00:16:54.441 "peer_address": { 00:16:54.441 "trtype": "TCP", 00:16:54.441 "adrfam": "IPv4", 00:16:54.441 "traddr": "10.0.0.1", 00:16:54.441 "trsvcid": "44336" 00:16:54.441 }, 00:16:54.441 "auth": { 00:16:54.441 "state": "completed", 00:16:54.441 "digest": "sha384", 00:16:54.441 "dhgroup": "ffdhe4096" 00:16:54.441 } 00:16:54.441 } 00:16:54.441 ]' 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.441 22:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.441 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.699 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:54.699 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:16:55.632 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.632 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.632 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.632 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:55.632 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.632 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.632 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.632 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.632 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.197 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:56.197 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.197 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.197 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.197 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.197 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.197 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.197 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.197 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:56.198 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.198 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.198 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.198 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.455 00:16:56.455 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.455 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.455 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.712 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.712 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.712 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.712 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.970 22:48:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.970 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.970 { 00:16:56.970 "cntlid": 81, 00:16:56.970 "qid": 0, 00:16:56.970 "state": "enabled", 00:16:56.970 "thread": "nvmf_tgt_poll_group_000", 00:16:56.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:56.970 "listen_address": { 00:16:56.970 "trtype": "TCP", 00:16:56.970 "adrfam": "IPv4", 00:16:56.970 "traddr": "10.0.0.2", 00:16:56.970 "trsvcid": "4420" 00:16:56.970 }, 00:16:56.970 "peer_address": { 00:16:56.970 "trtype": "TCP", 00:16:56.970 "adrfam": "IPv4", 00:16:56.970 "traddr": "10.0.0.1", 00:16:56.970 "trsvcid": "52794" 00:16:56.970 }, 00:16:56.970 "auth": { 00:16:56.970 "state": "completed", 00:16:56.970 "digest": "sha384", 00:16:56.970 "dhgroup": "ffdhe6144" 00:16:56.970 } 00:16:56.970 } 00:16:56.970 ]' 00:16:56.970 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.970 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.970 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.970 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.970 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.970 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.970 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.970 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.227 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:57.227 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:16:58.159 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.159 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.159 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.159 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.159 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.159 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.159 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:58.159 22:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.417 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.982 00:16:58.982 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.982 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.982 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.547 { 00:16:59.547 "cntlid": 83, 00:16:59.547 "qid": 0, 00:16:59.547 "state": "enabled", 00:16:59.547 "thread": "nvmf_tgt_poll_group_000", 00:16:59.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:59.547 "listen_address": { 00:16:59.547 "trtype": "TCP", 00:16:59.547 "adrfam": "IPv4", 00:16:59.547 "traddr": "10.0.0.2", 00:16:59.547 
"trsvcid": "4420" 00:16:59.547 }, 00:16:59.547 "peer_address": { 00:16:59.547 "trtype": "TCP", 00:16:59.547 "adrfam": "IPv4", 00:16:59.547 "traddr": "10.0.0.1", 00:16:59.547 "trsvcid": "52830" 00:16:59.547 }, 00:16:59.547 "auth": { 00:16:59.547 "state": "completed", 00:16:59.547 "digest": "sha384", 00:16:59.547 "dhgroup": "ffdhe6144" 00:16:59.547 } 00:16:59.547 } 00:16:59.547 ]' 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.547 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.805 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:16:59.805 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:17:00.738 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.738 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.738 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.738 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.738 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.738 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.738 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.738 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.995 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.561 00:17:01.561 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.561 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:01.561 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.818 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.818 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.818 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.818 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.818 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.818 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.818 { 00:17:01.818 "cntlid": 85, 00:17:01.818 "qid": 0, 00:17:01.818 "state": "enabled", 00:17:01.818 "thread": "nvmf_tgt_poll_group_000", 00:17:01.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:01.818 "listen_address": { 00:17:01.818 "trtype": "TCP", 00:17:01.818 "adrfam": "IPv4", 00:17:01.818 "traddr": "10.0.0.2", 00:17:01.818 "trsvcid": "4420" 00:17:01.818 }, 00:17:01.818 "peer_address": { 00:17:01.818 "trtype": "TCP", 00:17:01.818 "adrfam": "IPv4", 00:17:01.818 "traddr": "10.0.0.1", 00:17:01.818 "trsvcid": "52862" 00:17:01.818 }, 00:17:01.818 "auth": { 00:17:01.818 "state": "completed", 00:17:01.818 "digest": "sha384", 00:17:01.818 "dhgroup": "ffdhe6144" 00:17:01.818 } 00:17:01.818 } 00:17:01.818 ]' 00:17:01.818 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.818 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.818 22:48:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.818 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.818 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.076 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.076 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.076 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.334 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:02.334 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:03.266 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.266 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.266 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.266 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.266 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.266 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.266 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.266 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.524 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.089 00:17:04.089 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.089 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.089 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.346 { 00:17:04.346 "cntlid": 87, 00:17:04.346 "qid": 0, 00:17:04.346 "state": "enabled", 00:17:04.346 "thread": "nvmf_tgt_poll_group_000", 00:17:04.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:04.346 "listen_address": { 00:17:04.346 "trtype": "TCP", 00:17:04.346 "adrfam": "IPv4", 00:17:04.346 "traddr": "10.0.0.2", 00:17:04.346 "trsvcid": "4420" 00:17:04.346 }, 00:17:04.346 "peer_address": { 00:17:04.346 "trtype": "TCP", 00:17:04.346 "adrfam": "IPv4", 00:17:04.346 "traddr": "10.0.0.1", 00:17:04.346 "trsvcid": "52882" 00:17:04.346 }, 00:17:04.346 "auth": { 00:17:04.346 "state": "completed", 00:17:04.346 "digest": "sha384", 00:17:04.346 "dhgroup": "ffdhe6144" 00:17:04.346 } 00:17:04.346 } 00:17:04.346 ]' 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.346 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.604 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:17:04.604 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:17:05.536 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.536 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.536 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.536 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.536 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.536 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.536 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.536 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:05.536 22:48:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.793 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.725 00:17:06.725 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.725 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.725 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.983 { 00:17:06.983 "cntlid": 89, 00:17:06.983 "qid": 0, 00:17:06.983 "state": "enabled", 00:17:06.983 "thread": "nvmf_tgt_poll_group_000", 00:17:06.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:06.983 "listen_address": { 00:17:06.983 "trtype": "TCP", 00:17:06.983 "adrfam": "IPv4", 00:17:06.983 "traddr": "10.0.0.2", 00:17:06.983 
"trsvcid": "4420" 00:17:06.983 }, 00:17:06.983 "peer_address": { 00:17:06.983 "trtype": "TCP", 00:17:06.983 "adrfam": "IPv4", 00:17:06.983 "traddr": "10.0.0.1", 00:17:06.983 "trsvcid": "57362" 00:17:06.983 }, 00:17:06.983 "auth": { 00:17:06.983 "state": "completed", 00:17:06.983 "digest": "sha384", 00:17:06.983 "dhgroup": "ffdhe8192" 00:17:06.983 } 00:17:06.983 } 00:17:06.983 ]' 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.983 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.240 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:17:07.240 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:17:08.174 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.174 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.174 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.174 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.174 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.174 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.174 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:08.174 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.432 22:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.432 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.365 00:17:09.365 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.365 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.365 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.622 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.622 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.622 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.622 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.622 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.622 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.622 { 00:17:09.622 "cntlid": 91, 00:17:09.622 "qid": 0, 00:17:09.622 "state": "enabled", 00:17:09.622 "thread": "nvmf_tgt_poll_group_000", 00:17:09.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:09.622 "listen_address": { 00:17:09.622 "trtype": "TCP", 00:17:09.622 "adrfam": "IPv4", 00:17:09.622 "traddr": "10.0.0.2", 00:17:09.622 "trsvcid": "4420" 00:17:09.622 }, 00:17:09.622 "peer_address": { 00:17:09.622 "trtype": "TCP", 00:17:09.622 "adrfam": "IPv4", 00:17:09.622 "traddr": "10.0.0.1", 00:17:09.622 "trsvcid": "57388" 00:17:09.622 }, 00:17:09.622 "auth": { 00:17:09.622 "state": "completed", 00:17:09.622 "digest": "sha384", 00:17:09.622 "dhgroup": "ffdhe8192" 00:17:09.622 } 00:17:09.622 } 00:17:09.622 ]' 00:17:09.622 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.622 22:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.622 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.623 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:09.623 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.880 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.880 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.880 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.148 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:17:10.148 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.144 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.145 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.145 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.078 00:17:12.078 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.078 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.078 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.337 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.337 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.337 22:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.337 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.337 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.337 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.337 { 00:17:12.337 "cntlid": 93, 00:17:12.337 "qid": 0, 00:17:12.337 "state": "enabled", 00:17:12.337 "thread": "nvmf_tgt_poll_group_000", 00:17:12.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:12.337 "listen_address": { 00:17:12.337 "trtype": "TCP", 00:17:12.337 "adrfam": "IPv4", 00:17:12.337 "traddr": "10.0.0.2", 00:17:12.337 "trsvcid": "4420" 00:17:12.337 }, 00:17:12.337 "peer_address": { 00:17:12.337 "trtype": "TCP", 00:17:12.337 "adrfam": "IPv4", 00:17:12.337 "traddr": "10.0.0.1", 00:17:12.337 "trsvcid": "57428" 00:17:12.337 }, 00:17:12.337 "auth": { 00:17:12.337 "state": "completed", 00:17:12.337 "digest": "sha384", 00:17:12.337 "dhgroup": "ffdhe8192" 00:17:12.337 } 00:17:12.337 } 00:17:12.337 ]' 00:17:12.337 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.337 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.337 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.337 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.337 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.337 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.337 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.337 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.901 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:12.901 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.833 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.765 00:17:14.765 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.765 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.765 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.023 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.023 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.023 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.023 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.023 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.023 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.023 { 00:17:15.023 "cntlid": 95, 00:17:15.023 "qid": 0, 00:17:15.023 "state": "enabled", 00:17:15.023 "thread": "nvmf_tgt_poll_group_000", 00:17:15.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:15.023 "listen_address": { 00:17:15.023 "trtype": "TCP", 00:17:15.023 "adrfam": 
"IPv4", 00:17:15.023 "traddr": "10.0.0.2", 00:17:15.023 "trsvcid": "4420" 00:17:15.023 }, 00:17:15.023 "peer_address": { 00:17:15.023 "trtype": "TCP", 00:17:15.023 "adrfam": "IPv4", 00:17:15.023 "traddr": "10.0.0.1", 00:17:15.023 "trsvcid": "57452" 00:17:15.023 }, 00:17:15.023 "auth": { 00:17:15.023 "state": "completed", 00:17:15.023 "digest": "sha384", 00:17:15.023 "dhgroup": "ffdhe8192" 00:17:15.023 } 00:17:15.023 } 00:17:15.023 ]' 00:17:15.023 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.023 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.023 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.023 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.023 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.281 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.281 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.281 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.538 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:17:15.538 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:17:16.471 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.471 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.471 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.471 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.471 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.471 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:16.471 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.471 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.471 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:16.471 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.471 
22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.471 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.037 00:17:17.037 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.037 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.037 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.294 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.294 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.294 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.294 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.295 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.295 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.295 { 00:17:17.295 "cntlid": 97, 00:17:17.295 "qid": 0, 00:17:17.295 "state": "enabled", 00:17:17.295 "thread": "nvmf_tgt_poll_group_000", 00:17:17.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:17.295 "listen_address": { 00:17:17.295 "trtype": "TCP", 00:17:17.295 "adrfam": "IPv4", 00:17:17.295 "traddr": "10.0.0.2", 00:17:17.295 "trsvcid": "4420" 00:17:17.295 }, 00:17:17.295 "peer_address": { 00:17:17.295 "trtype": "TCP", 00:17:17.295 "adrfam": "IPv4", 00:17:17.295 "traddr": "10.0.0.1", 00:17:17.295 "trsvcid": "47816" 00:17:17.295 }, 00:17:17.295 "auth": { 00:17:17.295 "state": "completed", 00:17:17.295 "digest": "sha512", 00:17:17.295 "dhgroup": "null" 00:17:17.295 } 00:17:17.295 } 00:17:17.295 ]' 00:17:17.295 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.295 22:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.295 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.295 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:17.295 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.295 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.295 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.295 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.559 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:17:17.559 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:17:18.495 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.495 22:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:18.495 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.495 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.495 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.495 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.495 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:18.495 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.752 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.010 00:17:19.267 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.267 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.267 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.525 { 00:17:19.525 "cntlid": 99, 00:17:19.525 "qid": 0, 00:17:19.525 "state": "enabled", 00:17:19.525 "thread": "nvmf_tgt_poll_group_000", 00:17:19.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:19.525 "listen_address": { 00:17:19.525 "trtype": "TCP", 00:17:19.525 "adrfam": "IPv4", 00:17:19.525 "traddr": "10.0.0.2", 00:17:19.525 "trsvcid": "4420" 00:17:19.525 }, 00:17:19.525 "peer_address": { 00:17:19.525 "trtype": "TCP", 00:17:19.525 "adrfam": "IPv4", 00:17:19.525 "traddr": "10.0.0.1", 00:17:19.525 "trsvcid": "47848" 00:17:19.525 }, 00:17:19.525 "auth": { 00:17:19.525 "state": "completed", 00:17:19.525 "digest": "sha512", 00:17:19.525 "dhgroup": "null" 00:17:19.525 } 00:17:19.525 } 00:17:19.525 ]' 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.525 
22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.525 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.783 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:17:19.783 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:17:20.715 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.715 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.715 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.715 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.715 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.715 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.715 
22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:20.715 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:20.973 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:20.973 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.973 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.973 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:20.973 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:20.973 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.973 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.973 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.973 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.230 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.230 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.230 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.231 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.488 00:17:21.488 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.488 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.488 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.745 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.745 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.745 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.745 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.745 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.745 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.745 { 00:17:21.745 "cntlid": 101, 00:17:21.745 "qid": 0, 00:17:21.745 "state": "enabled", 00:17:21.745 "thread": "nvmf_tgt_poll_group_000", 00:17:21.745 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:21.745 "listen_address": { 00:17:21.745 "trtype": "TCP", 00:17:21.745 "adrfam": "IPv4", 00:17:21.745 "traddr": "10.0.0.2", 00:17:21.745 "trsvcid": "4420" 00:17:21.745 }, 00:17:21.745 "peer_address": { 00:17:21.745 "trtype": "TCP", 00:17:21.745 "adrfam": "IPv4", 00:17:21.745 "traddr": "10.0.0.1", 00:17:21.745 "trsvcid": "47856" 00:17:21.745 }, 00:17:21.745 "auth": { 00:17:21.745 "state": "completed", 00:17:21.745 "digest": "sha512", 00:17:21.745 "dhgroup": "null" 00:17:21.745 } 00:17:21.745 } 00:17:21.745 ]' 00:17:21.745 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.745 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.745 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.746 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:21.746 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.746 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.746 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.746 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.311 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:22.311 22:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.244 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.809 00:17:23.809 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.809 22:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.809 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.067 { 00:17:24.067 "cntlid": 103, 00:17:24.067 "qid": 0, 00:17:24.067 "state": "enabled", 00:17:24.067 "thread": "nvmf_tgt_poll_group_000", 00:17:24.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:24.067 "listen_address": { 00:17:24.067 "trtype": "TCP", 00:17:24.067 "adrfam": "IPv4", 00:17:24.067 "traddr": "10.0.0.2", 00:17:24.067 "trsvcid": "4420" 00:17:24.067 }, 00:17:24.067 "peer_address": { 00:17:24.067 "trtype": "TCP", 00:17:24.067 "adrfam": "IPv4", 00:17:24.067 "traddr": "10.0.0.1", 00:17:24.067 "trsvcid": "47876" 00:17:24.067 }, 00:17:24.067 "auth": { 00:17:24.067 "state": "completed", 00:17:24.067 "digest": "sha512", 00:17:24.067 "dhgroup": "null" 00:17:24.067 } 00:17:24.067 } 00:17:24.067 ]' 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.067 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.324 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:17:24.324 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:17:25.257 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.257 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:25.257 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.257 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.257 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.257 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.257 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.257 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.257 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.514 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.080 00:17:26.080 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.080 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.080 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
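The cycle traced above, which the test repeats for each key/dhgroup combination, reduces to a short sequence of SPDK RPC calls. This is a sketch reconstructed from the log, not the test script itself: the socket path `/var/tmp/host.sock`, the addresses, the NQNs, and the key names `key0`/`ckey0` are copied from the trace, and the commands assume a running SPDK nvmf target (default RPC socket) plus a host-side bdev daemon listening on `/var/tmp/host.sock`.

```shell
# Host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup
# under test (sha512 + ffdhe2048 in this iteration of the loop).
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side (default RPC socket): allow the host NQN with key0/ckey0.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach the controller, authenticating with the same key pair.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify: the controller exists and the qpair reports auth state "completed".
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq '.[0].auth'

# Tear down before the next key/dhgroup combination.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
```

The trace checks exactly these three fields of the qpair's `auth` object (`digest`, `dhgroup`, `state`) against the values it configured, which is why each iteration ends with the trio of `jq` probes before the detach.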
00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.338 { 00:17:26.338 "cntlid": 105, 00:17:26.338 "qid": 0, 00:17:26.338 "state": "enabled", 00:17:26.338 "thread": "nvmf_tgt_poll_group_000", 00:17:26.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:26.338 "listen_address": { 00:17:26.338 "trtype": "TCP", 00:17:26.338 "adrfam": "IPv4", 00:17:26.338 "traddr": "10.0.0.2", 00:17:26.338 "trsvcid": "4420" 00:17:26.338 }, 00:17:26.338 "peer_address": { 00:17:26.338 "trtype": "TCP", 00:17:26.338 "adrfam": "IPv4", 00:17:26.338 "traddr": "10.0.0.1", 00:17:26.338 "trsvcid": "49984" 00:17:26.338 }, 00:17:26.338 "auth": { 00:17:26.338 "state": "completed", 00:17:26.338 "digest": "sha512", 00:17:26.338 "dhgroup": "ffdhe2048" 00:17:26.338 } 00:17:26.338 } 00:17:26.338 ]' 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.338 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.338 22:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.596 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:17:26.596 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:17:27.527 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.527 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.527 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.527 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.527 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.527 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.527 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.527 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.784 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.041 00:17:28.298 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.298 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.298 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.556 { 00:17:28.556 "cntlid": 107, 00:17:28.556 "qid": 0, 00:17:28.556 "state": "enabled", 00:17:28.556 "thread": "nvmf_tgt_poll_group_000", 00:17:28.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:28.556 
"listen_address": { 00:17:28.556 "trtype": "TCP", 00:17:28.556 "adrfam": "IPv4", 00:17:28.556 "traddr": "10.0.0.2", 00:17:28.556 "trsvcid": "4420" 00:17:28.556 }, 00:17:28.556 "peer_address": { 00:17:28.556 "trtype": "TCP", 00:17:28.556 "adrfam": "IPv4", 00:17:28.556 "traddr": "10.0.0.1", 00:17:28.556 "trsvcid": "50014" 00:17:28.556 }, 00:17:28.556 "auth": { 00:17:28.556 "state": "completed", 00:17:28.556 "digest": "sha512", 00:17:28.556 "dhgroup": "ffdhe2048" 00:17:28.556 } 00:17:28.556 } 00:17:28.556 ]' 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.556 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.822 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:17:28.822 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:17:29.752 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.752 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.752 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.752 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.752 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.752 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.752 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.752 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.010 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.575 00:17:30.575 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:30.575 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.575 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.832 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.832 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.832 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.832 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.832 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.832 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.832 { 00:17:30.832 "cntlid": 109, 00:17:30.832 "qid": 0, 00:17:30.832 "state": "enabled", 00:17:30.832 "thread": "nvmf_tgt_poll_group_000", 00:17:30.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:30.832 "listen_address": { 00:17:30.832 "trtype": "TCP", 00:17:30.832 "adrfam": "IPv4", 00:17:30.832 "traddr": "10.0.0.2", 00:17:30.833 "trsvcid": "4420" 00:17:30.833 }, 00:17:30.833 "peer_address": { 00:17:30.833 "trtype": "TCP", 00:17:30.833 "adrfam": "IPv4", 00:17:30.833 "traddr": "10.0.0.1", 00:17:30.833 "trsvcid": "50040" 00:17:30.833 }, 00:17:30.833 "auth": { 00:17:30.833 "state": "completed", 00:17:30.833 "digest": "sha512", 00:17:30.833 "dhgroup": "ffdhe2048" 00:17:30.833 } 00:17:30.833 } 00:17:30.833 ]' 00:17:30.833 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.833 22:48:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.833 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.833 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.833 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.833 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.833 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.833 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.090 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:31.090 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:32.023 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.023 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.023 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.023 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.023 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.023 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.023 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.023 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:32.281 22:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.281 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.846 00:17:32.846 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.846 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.846 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.103 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.104 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.104 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.104 22:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.104 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.104 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.104 { 00:17:33.104 "cntlid": 111, 00:17:33.104 "qid": 0, 00:17:33.104 "state": "enabled", 00:17:33.104 "thread": "nvmf_tgt_poll_group_000", 00:17:33.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:33.104 "listen_address": { 00:17:33.104 "trtype": "TCP", 00:17:33.104 "adrfam": "IPv4", 00:17:33.104 "traddr": "10.0.0.2", 00:17:33.104 "trsvcid": "4420" 00:17:33.104 }, 00:17:33.104 "peer_address": { 00:17:33.104 "trtype": "TCP", 00:17:33.104 "adrfam": "IPv4", 00:17:33.104 "traddr": "10.0.0.1", 00:17:33.104 "trsvcid": "50068" 00:17:33.104 }, 00:17:33.104 "auth": { 00:17:33.104 "state": "completed", 00:17:33.104 "digest": "sha512", 00:17:33.104 "dhgroup": "ffdhe2048" 00:17:33.104 } 00:17:33.104 } 00:17:33.104 ]' 00:17:33.104 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.104 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.104 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.104 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.104 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.104 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.104 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.104 22:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.361 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:17:33.361 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:17:34.292 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.292 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.292 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.292 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.292 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.292 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.292 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.292 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:17:34.292 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.549 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.113 00:17:35.113 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.113 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.113 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.371 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.371 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.371 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.371 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.371 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.371 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.371 { 00:17:35.371 "cntlid": 113, 00:17:35.371 "qid": 0, 00:17:35.371 "state": "enabled", 00:17:35.371 "thread": "nvmf_tgt_poll_group_000", 00:17:35.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:35.371 "listen_address": { 
00:17:35.371 "trtype": "TCP", 00:17:35.371 "adrfam": "IPv4", 00:17:35.371 "traddr": "10.0.0.2", 00:17:35.371 "trsvcid": "4420" 00:17:35.371 }, 00:17:35.371 "peer_address": { 00:17:35.371 "trtype": "TCP", 00:17:35.371 "adrfam": "IPv4", 00:17:35.371 "traddr": "10.0.0.1", 00:17:35.371 "trsvcid": "44238" 00:17:35.371 }, 00:17:35.371 "auth": { 00:17:35.371 "state": "completed", 00:17:35.371 "digest": "sha512", 00:17:35.371 "dhgroup": "ffdhe3072" 00:17:35.371 } 00:17:35.371 } 00:17:35.371 ]' 00:17:35.371 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.371 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.371 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.371 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.371 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.371 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.371 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.371 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.655 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:17:35.655 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:17:36.616 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.616 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.616 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.616 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.616 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.616 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.616 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.616 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.873 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.437 00:17:37.437 22:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.437 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.437 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.437 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.437 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.437 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.438 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.438 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.438 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.438 { 00:17:37.438 "cntlid": 115, 00:17:37.438 "qid": 0, 00:17:37.438 "state": "enabled", 00:17:37.438 "thread": "nvmf_tgt_poll_group_000", 00:17:37.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:37.438 "listen_address": { 00:17:37.438 "trtype": "TCP", 00:17:37.438 "adrfam": "IPv4", 00:17:37.438 "traddr": "10.0.0.2", 00:17:37.438 "trsvcid": "4420" 00:17:37.438 }, 00:17:37.438 "peer_address": { 00:17:37.438 "trtype": "TCP", 00:17:37.438 "adrfam": "IPv4", 00:17:37.438 "traddr": "10.0.0.1", 00:17:37.438 "trsvcid": "44256" 00:17:37.438 }, 00:17:37.438 "auth": { 00:17:37.438 "state": "completed", 00:17:37.438 "digest": "sha512", 00:17:37.438 "dhgroup": "ffdhe3072" 00:17:37.438 } 00:17:37.438 } 00:17:37.438 ]' 00:17:37.695 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:17:37.695 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.695 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.695 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:37.695 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.695 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.695 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.695 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.952 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:17:37.952 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:17:38.885 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.885 22:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.885 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.885 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.885 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.885 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.885 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.885 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.142 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.708 00:17:39.708 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.708 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.708 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.966 { 00:17:39.966 "cntlid": 117, 00:17:39.966 "qid": 0, 00:17:39.966 "state": "enabled", 00:17:39.966 "thread": "nvmf_tgt_poll_group_000", 00:17:39.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:39.966 "listen_address": { 00:17:39.966 "trtype": "TCP", 00:17:39.966 "adrfam": "IPv4", 00:17:39.966 "traddr": "10.0.0.2", 00:17:39.966 "trsvcid": "4420" 00:17:39.966 }, 00:17:39.966 "peer_address": { 00:17:39.966 "trtype": "TCP", 00:17:39.966 "adrfam": "IPv4", 00:17:39.966 "traddr": "10.0.0.1", 00:17:39.966 "trsvcid": "44290" 00:17:39.966 }, 00:17:39.966 "auth": { 00:17:39.966 "state": "completed", 00:17:39.966 "digest": "sha512", 00:17:39.966 "dhgroup": "ffdhe3072" 00:17:39.966 } 00:17:39.966 } 00:17:39.966 ]' 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.966 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.224 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:40.224 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:41.157 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.157 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:41.157 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.157 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.157 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.157 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:41.157 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:41.157 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:41.415 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:41.980
00:17:41.980 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:41.980 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:41.980 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:41.981 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:41.981 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:41.981 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.981 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:42.240 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.240 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:42.240 {
00:17:42.240 "cntlid": 119,
00:17:42.240 "qid": 0,
00:17:42.240 "state": "enabled",
00:17:42.240 "thread": "nvmf_tgt_poll_group_000",
00:17:42.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:42.240 "listen_address": {
00:17:42.240 "trtype": "TCP",
00:17:42.240 "adrfam": "IPv4",
00:17:42.240 "traddr": "10.0.0.2",
00:17:42.240 "trsvcid": "4420"
00:17:42.240 },
00:17:42.240 "peer_address": {
00:17:42.240 "trtype": "TCP",
00:17:42.240 "adrfam": "IPv4",
00:17:42.240 "traddr": "10.0.0.1",
00:17:42.240 "trsvcid": "44308"
00:17:42.240 },
00:17:42.240 "auth": {
00:17:42.240 "state": "completed",
00:17:42.240 "digest": "sha512",
00:17:42.240 "dhgroup": "ffdhe3072"
00:17:42.240 }
00:17:42.240 }
00:17:42.240 ]'
00:17:42.240 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:42.240 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:42.240 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:42.240 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:42.240 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:42.240 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:42.240 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:42.241 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:42.499 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=:
00:17:42.499 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=:
00:17:43.432 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:43.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:43.432 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:43.432 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.432 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.432 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.432 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:43.432 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:43.432 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:43.432 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:43.691 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:43.948
00:17:44.207 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:44.207 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:44.207 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:44.464 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:44.464 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:44.464 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:44.464 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:44.464 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:44.464 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:44.464 {
00:17:44.464 "cntlid": 121,
00:17:44.464 "qid": 0,
00:17:44.464 "state": "enabled",
00:17:44.464 "thread": "nvmf_tgt_poll_group_000",
00:17:44.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:44.464 "listen_address": {
00:17:44.464 "trtype": "TCP",
00:17:44.464 "adrfam": "IPv4",
00:17:44.464 "traddr": "10.0.0.2",
00:17:44.464 "trsvcid": "4420"
00:17:44.464 },
00:17:44.464 "peer_address": {
00:17:44.464 "trtype": "TCP",
00:17:44.464 "adrfam": "IPv4",
00:17:44.464 "traddr": "10.0.0.1",
00:17:44.464 "trsvcid": "44338"
00:17:44.464 },
00:17:44.464 "auth": {
00:17:44.464 "state": "completed",
00:17:44.464 "digest": "sha512",
00:17:44.464 "dhgroup": "ffdhe4096"
00:17:44.464 }
00:17:44.464 }
00:17:44.464 ]'
00:17:44.464 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:44.464 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:44.464 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:44.464 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:44.464 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:44.464 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:44.464 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:44.464 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:44.722 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=:
00:17:44.722 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=:
00:17:45.655 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:45.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:45.655 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:45.655 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:45.655 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.655 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:45.655 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:45.655 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:45.655 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:45.912 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.913 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:46.477
00:17:46.477 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:46.478 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:46.478 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:46.735 {
00:17:46.735 "cntlid": 123,
00:17:46.735 "qid": 0,
00:17:46.735 "state": "enabled",
00:17:46.735 "thread": "nvmf_tgt_poll_group_000",
00:17:46.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:46.735 "listen_address": {
00:17:46.735 "trtype": "TCP",
00:17:46.735 "adrfam": "IPv4",
00:17:46.735 "traddr": "10.0.0.2",
00:17:46.735 "trsvcid": "4420"
00:17:46.735 },
00:17:46.735 "peer_address": {
00:17:46.735 "trtype": "TCP",
00:17:46.735 "adrfam": "IPv4",
00:17:46.735 "traddr": "10.0.0.1",
00:17:46.735 "trsvcid": "56588"
00:17:46.735 },
00:17:46.735 "auth": {
00:17:46.735 "state": "completed",
00:17:46.735 "digest": "sha512",
00:17:46.735 "dhgroup": "ffdhe4096"
00:17:46.735 }
00:17:46.735 }
00:17:46.735 ]'
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:46.735 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:46.993 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==:
00:17:46.993 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==:
00:17:47.926 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:47.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:47.926 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:47.926 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:47.926 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.926 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.926 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:47.926 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:47.926 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:48.183 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:48.184 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:48.749
00:17:48.749 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:48.749 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:48.749 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:49.008 {
00:17:49.008 "cntlid": 125,
00:17:49.008 "qid": 0,
00:17:49.008 "state": "enabled",
00:17:49.008 "thread": "nvmf_tgt_poll_group_000",
00:17:49.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:49.008 "listen_address": {
00:17:49.008 "trtype": "TCP",
00:17:49.008 "adrfam": "IPv4",
00:17:49.008 "traddr": "10.0.0.2",
00:17:49.008 "trsvcid": "4420"
00:17:49.008 },
00:17:49.008 "peer_address": {
00:17:49.008 "trtype": "TCP",
00:17:49.008 "adrfam": "IPv4",
00:17:49.008 "traddr": "10.0.0.1",
00:17:49.008 "trsvcid": "56608"
00:17:49.008 },
00:17:49.008 "auth": {
00:17:49.008 "state": "completed",
00:17:49.008 "digest": "sha512",
00:17:49.008 "dhgroup": "ffdhe4096"
00:17:49.008 }
00:17:49.008 }
00:17:49.008 ]'
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:49.008 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:49.265 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO:
00:17:49.265 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO:
00:17:50.198 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:50.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:50.198 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:50.198 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.198 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.198 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.198 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:50.198 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:50.198 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:50.457 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:51.022
00:17:51.022 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:51.022 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:51.022 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:51.280 {
00:17:51.280 "cntlid": 127,
00:17:51.280 "qid": 0,
00:17:51.280 "state": "enabled",
00:17:51.280 "thread": "nvmf_tgt_poll_group_000",
00:17:51.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:51.280 "listen_address": {
00:17:51.280 "trtype": "TCP",
00:17:51.280 "adrfam": "IPv4",
00:17:51.280 "traddr": "10.0.0.2",
00:17:51.280 "trsvcid": "4420"
00:17:51.280 },
00:17:51.280 "peer_address": {
00:17:51.280 "trtype": "TCP",
00:17:51.280 "adrfam": "IPv4",
00:17:51.280 "traddr": "10.0.0.1",
00:17:51.280 "trsvcid": "56638"
00:17:51.280 },
00:17:51.280 "auth": {
00:17:51.280 "state": "completed",
00:17:51.280 "digest": "sha512",
00:17:51.280 "dhgroup": "ffdhe4096"
00:17:51.280 }
00:17:51.280 }
00:17:51.280 ]'
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:51.280 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:51.538 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=:
00:17:51.538 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=:
00:17:52.471 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:52.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:52.471 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:52.471 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.471 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:52.471 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.471 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:52.471 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:52.471 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:52.471 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:52.728 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:17:52.728 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:52.728 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:52.728 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:52.729 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:52.729 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:52.729 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.729 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.729 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.729 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.729 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.729 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.729 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.294 00:17:53.294 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.294 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.294 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.552 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.552 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.552 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.552 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.552 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.552 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.552 { 00:17:53.552 "cntlid": 129, 00:17:53.552 "qid": 0, 00:17:53.552 "state": "enabled", 00:17:53.552 "thread": "nvmf_tgt_poll_group_000", 00:17:53.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:53.552 "listen_address": { 00:17:53.552 "trtype": "TCP", 00:17:53.552 "adrfam": "IPv4", 00:17:53.552 "traddr": "10.0.0.2", 00:17:53.552 "trsvcid": "4420" 00:17:53.552 }, 00:17:53.552 "peer_address": { 00:17:53.552 "trtype": "TCP", 00:17:53.552 "adrfam": "IPv4", 00:17:53.552 "traddr": "10.0.0.1", 00:17:53.552 "trsvcid": "56664" 00:17:53.552 }, 00:17:53.552 "auth": { 00:17:53.552 "state": "completed", 00:17:53.552 "digest": "sha512", 00:17:53.552 "dhgroup": "ffdhe6144" 00:17:53.552 } 00:17:53.552 } 00:17:53.552 ]' 00:17:53.552 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.552 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.552 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.809 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.809 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.809 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:53.809 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.809 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.067 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:17:54.067 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:17:54.999 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.999 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:54.999 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.999 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.999 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.999 22:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.999 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.999 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.257 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.822 00:17:55.822 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.822 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.822 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.081 { 00:17:56.081 "cntlid": 131, 00:17:56.081 "qid": 0, 00:17:56.081 "state": 
"enabled", 00:17:56.081 "thread": "nvmf_tgt_poll_group_000", 00:17:56.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:56.081 "listen_address": { 00:17:56.081 "trtype": "TCP", 00:17:56.081 "adrfam": "IPv4", 00:17:56.081 "traddr": "10.0.0.2", 00:17:56.081 "trsvcid": "4420" 00:17:56.081 }, 00:17:56.081 "peer_address": { 00:17:56.081 "trtype": "TCP", 00:17:56.081 "adrfam": "IPv4", 00:17:56.081 "traddr": "10.0.0.1", 00:17:56.081 "trsvcid": "55558" 00:17:56.081 }, 00:17:56.081 "auth": { 00:17:56.081 "state": "completed", 00:17:56.081 "digest": "sha512", 00:17:56.081 "dhgroup": "ffdhe6144" 00:17:56.081 } 00:17:56.081 } 00:17:56.081 ]' 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.081 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.338 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret 
DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:17:56.338 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:17:57.271 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.271 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.271 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.271 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.271 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.271 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.271 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:57.271 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.535 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.100 00:17:58.100 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.100 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.100 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.357 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.357 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.357 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.357 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.357 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.357 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.357 { 00:17:58.357 "cntlid": 133, 00:17:58.357 "qid": 0, 00:17:58.357 "state": "enabled", 00:17:58.357 "thread": "nvmf_tgt_poll_group_000", 00:17:58.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:58.357 "listen_address": { 00:17:58.357 "trtype": "TCP", 00:17:58.357 "adrfam": "IPv4", 00:17:58.358 "traddr": "10.0.0.2", 00:17:58.358 "trsvcid": "4420" 00:17:58.358 }, 00:17:58.358 "peer_address": { 00:17:58.358 "trtype": "TCP", 00:17:58.358 "adrfam": "IPv4", 00:17:58.358 "traddr": "10.0.0.1", 00:17:58.358 "trsvcid": "55592" 00:17:58.358 }, 00:17:58.358 "auth": { 00:17:58.358 "state": "completed", 00:17:58.358 "digest": "sha512", 00:17:58.358 "dhgroup": "ffdhe6144" 00:17:58.358 } 
00:17:58.358 } 00:17:58.358 ]' 00:17:58.358 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.358 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.358 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.358 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:58.358 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.615 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.615 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.615 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.872 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:58.873 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:17:59.805 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:17:59.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.805 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.805 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.805 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.805 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.805 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.805 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.805 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.062 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.063 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.655 00:18:00.655 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.655 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.655 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.959 { 00:18:00.959 "cntlid": 135, 00:18:00.959 "qid": 0, 00:18:00.959 "state": "enabled", 00:18:00.959 "thread": "nvmf_tgt_poll_group_000", 00:18:00.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:00.959 "listen_address": { 00:18:00.959 "trtype": "TCP", 00:18:00.959 "adrfam": "IPv4", 00:18:00.959 "traddr": "10.0.0.2", 00:18:00.959 "trsvcid": "4420" 00:18:00.959 }, 00:18:00.959 "peer_address": { 00:18:00.959 "trtype": "TCP", 00:18:00.959 "adrfam": "IPv4", 00:18:00.959 "traddr": "10.0.0.1", 00:18:00.959 "trsvcid": "55626" 00:18:00.959 }, 00:18:00.959 "auth": { 00:18:00.959 "state": "completed", 00:18:00.959 "digest": "sha512", 00:18:00.959 "dhgroup": "ffdhe6144" 00:18:00.959 } 00:18:00.959 } 00:18:00.959 ]' 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.959 22:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.959 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.217 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:18:01.217 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:18:02.151 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.151 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:02.151 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.151 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.151 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.151 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.151 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.151 22:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.151 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.409 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.342 00:18:03.342 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.342 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.342 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.342 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.342 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.342 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.342 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.342 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.342 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.342 { 00:18:03.342 "cntlid": 137, 00:18:03.342 "qid": 0, 00:18:03.342 "state": "enabled", 00:18:03.342 "thread": "nvmf_tgt_poll_group_000", 00:18:03.342 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:03.342 "listen_address": { 00:18:03.342 "trtype": "TCP", 00:18:03.342 "adrfam": "IPv4", 00:18:03.342 "traddr": "10.0.0.2", 00:18:03.342 "trsvcid": "4420" 00:18:03.342 }, 00:18:03.342 "peer_address": { 00:18:03.342 "trtype": "TCP", 00:18:03.342 "adrfam": "IPv4", 00:18:03.342 "traddr": "10.0.0.1", 00:18:03.342 "trsvcid": "55662" 00:18:03.342 }, 00:18:03.342 "auth": { 00:18:03.342 "state": "completed", 00:18:03.342 "digest": "sha512", 00:18:03.342 "dhgroup": "ffdhe8192" 00:18:03.342 } 00:18:03.342 } 00:18:03.342 ]' 00:18:03.342 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.600 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.600 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.600 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.600 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.600 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.600 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.600 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.857 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret 
DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:18:03.857 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:18:04.790 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.790 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.790 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.790 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.790 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.790 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.790 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.790 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:05.048 22:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.048 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.982 00:18:05.982 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.982 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.982 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.982 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.982 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.982 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.982 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.982 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.982 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.982 { 00:18:05.982 "cntlid": 139, 00:18:05.982 "qid": 0, 00:18:05.982 "state": "enabled", 00:18:05.982 "thread": "nvmf_tgt_poll_group_000", 00:18:05.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:05.982 "listen_address": { 00:18:05.982 "trtype": "TCP", 00:18:05.982 "adrfam": "IPv4", 00:18:05.982 "traddr": "10.0.0.2", 00:18:05.982 "trsvcid": "4420" 00:18:05.982 }, 00:18:05.982 "peer_address": { 00:18:05.982 "trtype": "TCP", 00:18:05.982 "adrfam": "IPv4", 00:18:05.982 "traddr": "10.0.0.1", 00:18:05.982 "trsvcid": "57228" 00:18:05.982 }, 00:18:05.982 "auth": { 00:18:05.982 "state": 
"completed", 00:18:05.982 "digest": "sha512", 00:18:05.982 "dhgroup": "ffdhe8192" 00:18:05.982 } 00:18:05.982 } 00:18:05.982 ]' 00:18:05.982 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.240 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.240 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.240 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.240 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.240 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.240 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.240 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.498 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:18:06.498 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: --dhchap-ctrl-secret DHHC-1:02:ZWM1YmFmOWRiZDkxOWMwMDA2ZDM2YTk5NmMzZDg3YThlZDE2ZDYxNTFjMGQ1NTdhrlreig==: 00:18:07.430 22:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.430 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:07.430 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.430 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.430 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.430 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.430 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.430 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.688 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.620 00:18:08.620 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.620 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.620 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.877 
22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.877 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.877 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.877 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.877 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.877 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.877 { 00:18:08.877 "cntlid": 141, 00:18:08.877 "qid": 0, 00:18:08.877 "state": "enabled", 00:18:08.877 "thread": "nvmf_tgt_poll_group_000", 00:18:08.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:08.877 "listen_address": { 00:18:08.877 "trtype": "TCP", 00:18:08.877 "adrfam": "IPv4", 00:18:08.877 "traddr": "10.0.0.2", 00:18:08.877 "trsvcid": "4420" 00:18:08.877 }, 00:18:08.877 "peer_address": { 00:18:08.877 "trtype": "TCP", 00:18:08.877 "adrfam": "IPv4", 00:18:08.877 "traddr": "10.0.0.1", 00:18:08.877 "trsvcid": "57248" 00:18:08.877 }, 00:18:08.877 "auth": { 00:18:08.877 "state": "completed", 00:18:08.877 "digest": "sha512", 00:18:08.877 "dhgroup": "ffdhe8192" 00:18:08.877 } 00:18:08.877 } 00:18:08.877 ]' 00:18:08.877 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.877 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.877 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.135 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.135 22:49:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.135 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.135 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.135 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.393 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:18:09.393 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:01:OGM5NDE2NmMzMmE3ZDBmY2U3YmE3OTE0MDM0YjA4OTn2ZOPO: 00:18:10.326 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.326 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.326 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.326 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.326 
22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.326 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.326 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.326 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.584 22:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.584 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.517 00:18:11.517 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.517 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.517 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.775 { 00:18:11.775 "cntlid": 143, 
00:18:11.775 "qid": 0, 00:18:11.775 "state": "enabled", 00:18:11.775 "thread": "nvmf_tgt_poll_group_000", 00:18:11.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:11.775 "listen_address": { 00:18:11.775 "trtype": "TCP", 00:18:11.775 "adrfam": "IPv4", 00:18:11.775 "traddr": "10.0.0.2", 00:18:11.775 "trsvcid": "4420" 00:18:11.775 }, 00:18:11.775 "peer_address": { 00:18:11.775 "trtype": "TCP", 00:18:11.775 "adrfam": "IPv4", 00:18:11.775 "traddr": "10.0.0.1", 00:18:11.775 "trsvcid": "57272" 00:18:11.775 }, 00:18:11.775 "auth": { 00:18:11.775 "state": "completed", 00:18:11.775 "digest": "sha512", 00:18:11.775 "dhgroup": "ffdhe8192" 00:18:11.775 } 00:18:11.775 } 00:18:11.775 ]' 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.775 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.033 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:18:12.033 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:18:12.966 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.966 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.966 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.966 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.966 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.966 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:12.966 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:12.966 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:12.966 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.966 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:18:12.966 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.536 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.468 00:18:14.468 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.468 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.468 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.468 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.468 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.468 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.468 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.468 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.468 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.468 { 00:18:14.468 "cntlid": 145, 00:18:14.468 "qid": 0, 00:18:14.468 "state": "enabled", 00:18:14.468 "thread": "nvmf_tgt_poll_group_000", 00:18:14.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:14.468 "listen_address": { 
00:18:14.468 "trtype": "TCP", 00:18:14.468 "adrfam": "IPv4", 00:18:14.468 "traddr": "10.0.0.2", 00:18:14.468 "trsvcid": "4420" 00:18:14.468 }, 00:18:14.468 "peer_address": { 00:18:14.468 "trtype": "TCP", 00:18:14.468 "adrfam": "IPv4", 00:18:14.468 "traddr": "10.0.0.1", 00:18:14.468 "trsvcid": "57298" 00:18:14.468 }, 00:18:14.468 "auth": { 00:18:14.468 "state": "completed", 00:18:14.468 "digest": "sha512", 00:18:14.468 "dhgroup": "ffdhe8192" 00:18:14.468 } 00:18:14.468 } 00:18:14.468 ]' 00:18:14.468 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.468 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.468 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.726 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.726 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.726 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.726 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.726 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.983 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:18:14.984 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Y2Q5ZTA3YWQ1ZjUzZDRlNmJlN2NlNjk3NGQxZmFjYWE1NmMyZTZhYzliMWFlY2M4W8PU0g==: --dhchap-ctrl-secret DHHC-1:03:OTlhZWU3ZjI2YWY1NTIzMmUzOTRjOTU3NzQwMGViYmVjMTg4ODYzYWE5MTY4ZTVjNTVjYmFhODc5NWIwZjU5Nai03eM=: 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:15.916 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:16.849 request: 00:18:16.849 { 00:18:16.849 "name": "nvme0", 00:18:16.849 "trtype": "tcp", 00:18:16.849 "traddr": "10.0.0.2", 00:18:16.849 "adrfam": "ipv4", 00:18:16.849 "trsvcid": "4420", 00:18:16.849 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:16.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:16.849 "prchk_reftag": false, 00:18:16.849 "prchk_guard": false, 00:18:16.849 "hdgst": false, 00:18:16.849 "ddgst": 
false, 00:18:16.849 "dhchap_key": "key2", 00:18:16.849 "allow_unrecognized_csi": false, 00:18:16.849 "method": "bdev_nvme_attach_controller", 00:18:16.849 "req_id": 1 00:18:16.849 } 00:18:16.849 Got JSON-RPC error response 00:18:16.849 response: 00:18:16.849 { 00:18:16.849 "code": -5, 00:18:16.849 "message": "Input/output error" 00:18:16.849 } 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.849 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:17.414 request: 00:18:17.414 { 00:18:17.414 "name": "nvme0", 00:18:17.414 "trtype": "tcp", 00:18:17.414 "traddr": "10.0.0.2", 
00:18:17.414 "adrfam": "ipv4", 00:18:17.414 "trsvcid": "4420", 00:18:17.414 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:17.414 "prchk_reftag": false, 00:18:17.414 "prchk_guard": false, 00:18:17.414 "hdgst": false, 00:18:17.414 "ddgst": false, 00:18:17.414 "dhchap_key": "key1", 00:18:17.414 "dhchap_ctrlr_key": "ckey2", 00:18:17.414 "allow_unrecognized_csi": false, 00:18:17.414 "method": "bdev_nvme_attach_controller", 00:18:17.414 "req_id": 1 00:18:17.414 } 00:18:17.414 Got JSON-RPC error response 00:18:17.414 response: 00:18:17.414 { 00:18:17.414 "code": -5, 00:18:17.414 "message": "Input/output error" 00:18:17.414 } 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.414 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.346 request: 00:18:18.346 { 00:18:18.346 "name": "nvme0", 00:18:18.346 "trtype": "tcp", 00:18:18.346 "traddr": "10.0.0.2", 00:18:18.346 "adrfam": "ipv4", 00:18:18.346 "trsvcid": "4420", 00:18:18.346 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:18.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:18.346 "prchk_reftag": false, 00:18:18.346 "prchk_guard": false, 00:18:18.346 "hdgst": false, 00:18:18.346 "ddgst": false, 00:18:18.346 "dhchap_key": "key1", 00:18:18.346 "dhchap_ctrlr_key": "ckey1", 00:18:18.346 "allow_unrecognized_csi": false, 00:18:18.346 "method": "bdev_nvme_attach_controller", 00:18:18.346 "req_id": 1 00:18:18.346 } 00:18:18.346 Got JSON-RPC error response 00:18:18.346 response: 00:18:18.346 { 00:18:18.346 "code": -5, 00:18:18.346 "message": "Input/output error" 00:18:18.346 } 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.346 
22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 52664 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 52664 ']' 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 52664 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.346 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 52664 00:18:18.346 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.346 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.346 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 52664' 00:18:18.346 killing process with pid 52664 00:18:18.346 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 52664 00:18:18.346 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 52664 00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.605 
22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=75525 00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 75525 00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 75525 ']' 00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.605 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 75525 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 75525 ']' 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.863 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.430 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.430 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:19.430 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:19.430 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.430 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.430 null0 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dzu 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.W2y ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W2y 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.430 22:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.R6W 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.M0Q ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.M0Q 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ULb 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.430 22:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.8E8 ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8E8 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4kf 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.430 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.800 nvme0n1 00:18:20.800 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.800 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.800 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:21.058 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.058 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.058 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.058 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.058 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.058 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.058 { 00:18:21.058 "cntlid": 1, 00:18:21.058 "qid": 0, 00:18:21.058 "state": "enabled", 00:18:21.058 "thread": "nvmf_tgt_poll_group_000", 00:18:21.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:21.058 "listen_address": { 00:18:21.058 "trtype": "TCP", 00:18:21.058 "adrfam": "IPv4", 00:18:21.058 "traddr": "10.0.0.2", 00:18:21.058 "trsvcid": "4420" 00:18:21.058 }, 00:18:21.058 "peer_address": { 00:18:21.058 "trtype": "TCP", 00:18:21.058 "adrfam": "IPv4", 00:18:21.058 "traddr": "10.0.0.1", 00:18:21.058 "trsvcid": "54066" 00:18:21.058 }, 00:18:21.058 "auth": { 00:18:21.058 "state": "completed", 00:18:21.058 "digest": "sha512", 00:18:21.058 "dhgroup": "ffdhe8192" 00:18:21.058 } 00:18:21.058 } 00:18:21.058 ]' 00:18:21.058 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.058 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.058 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.315 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:18:21.315 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.315 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.315 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.315 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.573 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:18:21.573 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:18:22.506 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.506 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.506 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.506 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.506 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:22.506 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:22.506 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.506 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.506 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.506 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:22.506 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:22.763 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:22.763 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:22.763 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:22.763 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:22.763 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.763 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:22.763 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.763 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:18:22.763 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.764 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.021 request: 00:18:23.021 { 00:18:23.021 "name": "nvme0", 00:18:23.021 "trtype": "tcp", 00:18:23.021 "traddr": "10.0.0.2", 00:18:23.021 "adrfam": "ipv4", 00:18:23.021 "trsvcid": "4420", 00:18:23.021 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:23.021 "prchk_reftag": false, 00:18:23.021 "prchk_guard": false, 00:18:23.021 "hdgst": false, 00:18:23.021 "ddgst": false, 00:18:23.021 "dhchap_key": "key3", 00:18:23.021 "allow_unrecognized_csi": false, 00:18:23.021 "method": "bdev_nvme_attach_controller", 00:18:23.021 "req_id": 1 00:18:23.021 } 00:18:23.021 Got JSON-RPC error response 00:18:23.021 response: 00:18:23.021 { 00:18:23.021 "code": -5, 00:18:23.021 "message": "Input/output error" 00:18:23.021 } 00:18:23.021 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:23.021 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.021 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.021 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.021 22:49:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:23.021 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:23.021 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:23.021 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:23.279 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:23.279 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:23.279 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:23.279 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:23.279 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.279 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:23.279 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.279 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.279 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:23.279 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.537 request: 00:18:23.537 { 00:18:23.537 "name": "nvme0", 00:18:23.537 "trtype": "tcp", 00:18:23.537 "traddr": "10.0.0.2", 00:18:23.537 "adrfam": "ipv4", 00:18:23.537 "trsvcid": "4420", 00:18:23.537 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:23.537 "prchk_reftag": false, 00:18:23.537 "prchk_guard": false, 00:18:23.537 "hdgst": false, 00:18:23.537 "ddgst": false, 00:18:23.537 "dhchap_key": "key3", 00:18:23.537 "allow_unrecognized_csi": false, 00:18:23.537 "method": "bdev_nvme_attach_controller", 00:18:23.537 "req_id": 1 00:18:23.537 } 00:18:23.537 Got JSON-RPC error response 00:18:23.537 response: 00:18:23.537 { 00:18:23.537 "code": -5, 00:18:23.537 "message": "Input/output error" 00:18:23.537 } 00:18:23.537 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:23.537 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.537 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.537 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.537 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:23.537 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:23.537 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:18:23.537 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.537 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.537 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.794 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.794 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.794 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.794 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.794 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.794 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.794 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.052 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.052 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:18:24.052 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:24.052 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.052 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:24.052 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.052 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:24.052 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.052 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.052 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.052 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.617 request: 00:18:24.617 { 00:18:24.617 "name": "nvme0", 00:18:24.617 "trtype": "tcp", 00:18:24.617 "traddr": "10.0.0.2", 00:18:24.617 "adrfam": "ipv4", 00:18:24.617 "trsvcid": "4420", 00:18:24.617 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.617 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:24.617 "prchk_reftag": false, 00:18:24.617 "prchk_guard": false, 00:18:24.617 "hdgst": false, 00:18:24.617 "ddgst": false, 00:18:24.617 "dhchap_key": "key0", 00:18:24.617 "dhchap_ctrlr_key": "key1", 00:18:24.617 "allow_unrecognized_csi": false, 00:18:24.617 "method": "bdev_nvme_attach_controller", 00:18:24.617 "req_id": 1 00:18:24.617 } 00:18:24.617 Got JSON-RPC error response 00:18:24.617 response: 00:18:24.617 { 00:18:24.617 "code": -5, 00:18:24.617 "message": "Input/output error" 00:18:24.617 } 00:18:24.617 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:24.617 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.617 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.617 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.617 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:24.617 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:24.617 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:24.875 nvme0n1 00:18:24.875 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:18:24.875 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.875 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:25.132 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.132 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.132 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.390 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:18:25.390 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.390 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.390 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.390 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:25.390 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:25.390 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:26.824 nvme0n1 00:18:26.824 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:26.824 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:26.824 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.082 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.082 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.082 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.082 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.082 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.082 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:27.082 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:27.082 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.339 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.340 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:18:27.340 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: --dhchap-ctrl-secret DHHC-1:03:Yzk1MGVmNWY1YzIyN2E4ZDRmZWY3NTI3NDhkYTliNTQ4Mzk0YWU3MDdmN2IxN2UyOWEzMzkzNTUyZjJiMzgzMEre94s=: 00:18:28.271 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:28.271 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:28.271 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:28.271 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:28.271 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:28.271 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:28.271 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:28.271 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.271 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.529 22:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:28.529 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:28.529 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:28.529 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:28.529 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.529 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:28.529 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.529 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:28.529 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:28.530 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:29.462 request: 00:18:29.462 { 00:18:29.462 "name": "nvme0", 00:18:29.462 "trtype": "tcp", 00:18:29.462 "traddr": "10.0.0.2", 00:18:29.462 "adrfam": "ipv4", 00:18:29.462 "trsvcid": "4420", 00:18:29.462 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.462 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:29.462 "prchk_reftag": false, 00:18:29.462 "prchk_guard": false, 00:18:29.462 "hdgst": false, 00:18:29.462 "ddgst": false, 00:18:29.462 "dhchap_key": "key1", 00:18:29.462 "allow_unrecognized_csi": false, 00:18:29.462 "method": "bdev_nvme_attach_controller", 00:18:29.462 "req_id": 1 00:18:29.462 } 00:18:29.462 Got JSON-RPC error response 00:18:29.462 response: 00:18:29.462 { 00:18:29.462 "code": -5, 00:18:29.462 "message": "Input/output error" 00:18:29.462 } 00:18:29.462 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:29.462 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.462 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.462 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.462 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.462 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.463 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:30.834 nvme0n1 00:18:30.834 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:18:30.834 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:30.834 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.092 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.092 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.092 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.350 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.350 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.350 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.350 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.350 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:31.350 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:31.350 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:31.607 nvme0n1 00:18:31.607 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:31.607 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:31.607 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.864 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.864 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.864 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: '' 2s 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:32.121 22:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: ]] 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YzRhOGJmYjdhM2U1YWVkNzhhZmI2NDJjZDEzNmI2YTfcfUPg: 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:32.121 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:34.647 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:34.647 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:34.647 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:34.647 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:34.647 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:34.647 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:34.647 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:18:34.647 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: 2s 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: ]] 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:NGFlNmI3YTlkMjU3MGI4OWZmYjQ0NjlmNjI5NGUwMGQ5Y2VlZDM5NmI2YzgwNDliXRYPvg==: 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:34.648 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.545 22:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:36.545 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:37.918 nvme0n1 00:18:37.918 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.918 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.918 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.918 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.918 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.918 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:38.483 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:38.483 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:38.483 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.741 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.741 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.741 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.741 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.741 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.741 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:38.741 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:38.998 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:38.998 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.998 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:39.256 22:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.256 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:40.188 request: 00:18:40.188 { 00:18:40.188 "name": "nvme0", 00:18:40.188 "dhchap_key": "key1", 00:18:40.188 "dhchap_ctrlr_key": "key3", 00:18:40.188 "method": "bdev_nvme_set_keys", 00:18:40.188 "req_id": 1 00:18:40.188 } 00:18:40.188 Got JSON-RPC error response 00:18:40.188 response: 00:18:40.188 { 00:18:40.188 "code": -13, 00:18:40.188 "message": "Permission denied" 00:18:40.188 } 00:18:40.188 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:40.188 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.188 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.188 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.188 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:40.188 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:40.188 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.445 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:40.445 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:41.376 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:41.376 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:41.376 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
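The `NOT` wrapper driving the expected `Permission denied` above asserts that a command fails. A simplified sketch of the inversion idea; the real helper in `autotest_common.sh` also validates the argument with `type -t` and tracks the exit status in `es`:

```shell
# Simplified sketch of the NOT helper: run the command, capture its
# exit status, and succeed only if the command failed.
NOT() {
    local es=0
    "$@" || es=$?
    # Mirror the `(( !es == 0 ))` check from the trace: invert the status.
    (( es != 0 ))
}
```

Here `NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3` passes precisely because the RPC returns `-13 Permission denied` after the subsystem was restricted to key2/key3.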
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.633 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:41.633 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:41.633 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.633 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.633 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.633 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.633 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.633 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:43.002 nvme0n1 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:43.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:43.935 request: 00:18:43.935 { 00:18:43.935 "name": "nvme0", 00:18:43.935 "dhchap_key": "key2", 
00:18:43.935 "dhchap_ctrlr_key": "key0", 00:18:43.935 "method": "bdev_nvme_set_keys", 00:18:43.935 "req_id": 1 00:18:43.935 } 00:18:43.935 Got JSON-RPC error response 00:18:43.935 response: 00:18:43.935 { 00:18:43.935 "code": -13, 00:18:43.935 "message": "Permission denied" 00:18:43.935 } 00:18:43.935 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:43.935 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:43.935 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:43.935 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:43.935 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:43.935 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.935 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:44.194 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:44.194 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:45.125 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:45.126 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:45.126 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.384 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:45.384 22:49:53 
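The `jq length` / `sleep 1s` loop above waits for the host's controller list to drain after the rejected rekey forces a disconnect. The pattern generalizes to any command that prints a count; the helper name below is illustrative, not an SPDK function:

```shell
# Poll a command that prints a count; return once it reaches zero, or
# fail after `tries` attempts. Mirrors the
# `(( $(hostrpc bdev_nvme_get_controllers | jq length) != 0 )) && sleep 1s`
# loop in the trace.
wait_for_zero() {
    local tries=$1; shift
    local n
    while :; do
        n=$("$@")
        (( n == 0 )) && return 0
        (( --tries <= 0 )) && return 1
        sleep 1
    done
}
```

In the log the first poll reports `1 != 0`, the test sleeps one second, and the second poll sees `0 != 0` fail, i.e. the controller is gone.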
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:45.384 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:45.384 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 52688 00:18:45.384 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 52688 ']' 00:18:45.384 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 52688 00:18:45.384 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:45.642 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.642 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 52688 00:18:45.642 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:45.642 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:45.642 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 52688' 00:18:45.642 killing process with pid 52688 00:18:45.642 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 52688 00:18:45.642 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 52688 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp 
== tcp ']' 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:45.899 rmmod nvme_tcp 00:18:45.899 rmmod nvme_fabrics 00:18:45.899 rmmod nvme_keyring 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 75525 ']' 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 75525 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 75525 ']' 00:18:45.899 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 75525 00:18:45.900 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:45.900 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.900 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75525 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75525' 00:18:46.158 
killing process with pid 75525 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 75525 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 75525 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.158 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.703 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:48.703 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.dzu /tmp/spdk.key-sha256.R6W /tmp/spdk.key-sha384.ULb /tmp/spdk.key-sha512.4kf 
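The two `killprocess` calls above (pids 52688 and 75525) follow the same shape: bail on an empty pid, confirm the process exists with `kill -0`, kill it, and wait for it to exit. A stripped-down sketch; the real helper also inspects the process name via `ps -o comm=` and escalates for processes launched under sudo, which this version omits:

```shell
# Sketch of the killprocess idea: refuse an empty pid, check liveness,
# signal the process, then reap it.
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap (only works for children)
    return 0
}
```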
/tmp/spdk.key-sha512.W2y /tmp/spdk.key-sha384.M0Q /tmp/spdk.key-sha256.8E8 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:48.703 00:18:48.703 real 3m31.545s 00:18:48.703 user 8m16.673s 00:18:48.703 sys 0m27.999s 00:18:48.703 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.703 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 ************************************ 00:18:48.703 END TEST nvmf_auth_target 00:18:48.703 ************************************ 00:18:48.703 22:49:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:48.703 22:49:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:48.703 22:49:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:48.703 22:49:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.703 22:49:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 ************************************ 00:18:48.703 START TEST nvmf_bdevio_no_huge 00:18:48.703 ************************************ 00:18:48.703 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:48.703 * Looking for test storage... 
00:18:48.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:48.703 22:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.703 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.704 22:49:56 
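The `cmp_versions` walk above (`lt 1.15 2` against the installed lcov) splits each version on `.-:` into `ver1`/`ver2` arrays and compares field by field. The same ordering check can be sketched with `sort -V`; this is a shortcut using GNU version sort, not SPDK's field-by-field implementation:

```shell
# True when $1 sorts strictly before $2 under version ordering,
# e.g. version_lt 1.15 2 succeeds, matching the trace's `lt 1.15 2`.
version_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

The trace resolves `decimal 1` and `decimal 2`, finds `ver1[0]=1 < ver2[0]=2`, and returns 0, so the lcov-version-dependent `--rc` options get enabled.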
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:48.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.704 --rc genhtml_branch_coverage=1 00:18:48.704 --rc genhtml_function_coverage=1 00:18:48.704 --rc genhtml_legend=1 00:18:48.704 --rc geninfo_all_blocks=1 00:18:48.704 --rc geninfo_unexecuted_blocks=1 00:18:48.704 00:18:48.704 ' 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:48.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.704 --rc genhtml_branch_coverage=1 00:18:48.704 --rc genhtml_function_coverage=1 00:18:48.704 --rc genhtml_legend=1 00:18:48.704 --rc geninfo_all_blocks=1 00:18:48.704 --rc geninfo_unexecuted_blocks=1 00:18:48.704 00:18:48.704 ' 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:48.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.704 --rc genhtml_branch_coverage=1 00:18:48.704 --rc genhtml_function_coverage=1 00:18:48.704 --rc genhtml_legend=1 00:18:48.704 --rc geninfo_all_blocks=1 00:18:48.704 --rc geninfo_unexecuted_blocks=1 00:18:48.704 00:18:48.704 ' 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:48.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.704 --rc genhtml_branch_coverage=1 00:18:48.704 --rc genhtml_function_coverage=1 00:18:48.704 --rc genhtml_legend=1 00:18:48.704 --rc geninfo_all_blocks=1 00:18:48.704 --rc geninfo_unexecuted_blocks=1 00:18:48.704 00:18:48.704 ' 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:48.704 
22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
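The `[: : integer expression expected` complaint logged above comes from `'[' '' -eq 1 ']'` in `nvmf/common.sh` line 33: `-eq` requires integers, and the variable being tested expands to an empty string. A defensive pattern that avoids the noise; the helper name here is illustrative:

```shell
# `[ "$x" -eq 1 ]` errors out when x is empty; defaulting the expansion
# to 0 sidesteps the "integer expression expected" message.
is_one() {
    [ "${1:-0}" -eq 1 ]
}
```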
00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:48.704 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.610 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.610 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:18:50.611 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:50.611 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:50.611 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.611 
22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:50.611 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
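The "Found net devices under 0000:0a:00.x" lines above come from a sysfs lookup: for each PCI address, the script globs the device's `net/` directory and strips the path prefix with `${arr[@]##*/}` to get interface names. A self-contained sketch of that lookup against a throwaway stand-in tree (the real script reads `/sys/bus/pci/devices`):

```shell
#!/usr/bin/env bash
# Sketch of the PCI-to-netdev mapping: glob <device>/net/* and keep basenames.
set -euo pipefail

# Throwaway stand-in for /sys/bus/pci/devices with the ports seen in the log.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)        # one dir per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, keep interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```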
00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:50.611 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.870 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.870 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.870 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:50.870 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:50.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:18:50.870 00:18:50.870 --- 10.0.0.2 ping statistics --- 00:18:50.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.870 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:18:50.870 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:18:50.871 00:18:50.871 --- 10.0.0.1 ping statistics --- 00:18:50.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.871 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=80775 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 80775 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 80775 ']' 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.871 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.871 [2024-12-10 22:49:58.438078] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
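The `nvmf_tcp_init` sequence traced above builds the test topology from the two physical ice ports: one stays in the default namespace as the initiator (`cvl_0_1`, 10.0.0.1), the other moves into the `cvl_0_0_ns_spdk` namespace as the target (`cvl_0_0`, 10.0.0.2), with an iptables ACCEPT for the NVMe/TCP port and ping checks in both directions. A dry-run sketch of that sequence (echoing rather than executing, since the real commands require root and those specific interfaces):

```shell
#!/usr/bin/env bash
# Dry-run of the namespace setup done by nvmf_tcp_init. "run" only echoes
# the command; on a real box it would be: run() { sudo "$@"; }
set -euo pipefail

run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                           # target port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                        # reachability check, as in the log
```

Running the target under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix seen later in the log) is what forces initiator traffic over the real NIC pair instead of loopback.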
00:18:50.871 [2024-12-10 22:49:58.438171] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:50.871 [2024-12-10 22:49:58.520633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:50.871 [2024-12-10 22:49:58.582081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.871 [2024-12-10 22:49:58.582145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.871 [2024-12-10 22:49:58.582158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.871 [2024-12-10 22:49:58.582169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.871 [2024-12-10 22:49:58.582177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.871 [2024-12-10 22:49:58.583217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:18:50.871 [2024-12-10 22:49:58.583280] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:18:50.871 [2024-12-10 22:49:58.583332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:18:50.871 [2024-12-10 22:49:58.583335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.130 [2024-12-10 22:49:58.747611] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:51.130 22:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.130 Malloc0 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.130 [2024-12-10 22:49:58.785602] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.130 22:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:51.130 { 00:18:51.130 "params": { 00:18:51.130 "name": "Nvme$subsystem", 00:18:51.130 "trtype": "$TEST_TRANSPORT", 00:18:51.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:51.130 "adrfam": "ipv4", 00:18:51.130 "trsvcid": "$NVMF_PORT", 00:18:51.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:51.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:51.130 "hdgst": ${hdgst:-false}, 00:18:51.130 "ddgst": ${ddgst:-false} 00:18:51.130 }, 00:18:51.130 "method": "bdev_nvme_attach_controller" 00:18:51.130 } 00:18:51.130 EOF 00:18:51.130 )") 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:51.130 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:51.130 "params": { 00:18:51.130 "name": "Nvme1", 00:18:51.130 "trtype": "tcp", 00:18:51.130 "traddr": "10.0.0.2", 00:18:51.130 "adrfam": "ipv4", 00:18:51.130 "trsvcid": "4420", 00:18:51.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.130 "hdgst": false, 00:18:51.130 "ddgst": false 00:18:51.130 }, 00:18:51.130 "method": "bdev_nvme_attach_controller" 00:18:51.130 }' 00:18:51.130 [2024-12-10 22:49:58.836674] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:18:51.130 [2024-12-10 22:49:58.836765] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid80806 ] 00:18:51.388 [2024-12-10 22:49:58.909144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:51.389 [2024-12-10 22:49:58.974242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.389 [2024-12-10 22:49:58.974292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.389 [2024-12-10 22:49:58.974295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.646 I/O targets: 00:18:51.646 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:51.646 00:18:51.646 00:18:51.646 CUnit - A unit testing framework for C - Version 2.1-3 00:18:51.646 http://cunit.sourceforge.net/ 00:18:51.646 00:18:51.646 00:18:51.646 Suite: bdevio tests on: Nvme1n1 00:18:51.646 Test: blockdev write read block ...passed 00:18:51.904 Test: blockdev write zeroes read block ...passed 00:18:51.904 Test: blockdev write zeroes read no split ...passed 00:18:51.904 Test: blockdev write zeroes 
read split ...passed 00:18:51.904 Test: blockdev write zeroes read split partial ...passed 00:18:51.904 Test: blockdev reset ...[2024-12-10 22:49:59.522964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:51.904 [2024-12-10 22:49:59.523081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19272f0 (9): Bad file descriptor 00:18:51.904 [2024-12-10 22:49:59.580484] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:51.904 passed 00:18:51.904 Test: blockdev write read 8 blocks ...passed 00:18:52.162 Test: blockdev write read size > 128k ...passed 00:18:52.162 Test: blockdev write read invalid size ...passed 00:18:52.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:52.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:52.162 Test: blockdev write read max offset ...passed 00:18:52.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:52.162 Test: blockdev writev readv 8 blocks ...passed 00:18:52.162 Test: blockdev writev readv 30 x 1block ...passed 00:18:52.162 Test: blockdev writev readv block ...passed 00:18:52.162 Test: blockdev writev readv size > 128k ...passed 00:18:52.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:52.162 Test: blockdev comparev and writev ...[2024-12-10 22:49:59.835487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.162 [2024-12-10 22:49:59.835524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:52.162 [2024-12-10 22:49:59.835556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.162 [2024-12-10 
22:49:59.835576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:52.162 [2024-12-10 22:49:59.835889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.162 [2024-12-10 22:49:59.835916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:52.162 [2024-12-10 22:49:59.835938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.162 [2024-12-10 22:49:59.835955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:52.162 [2024-12-10 22:49:59.836258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.162 [2024-12-10 22:49:59.836283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:52.162 [2024-12-10 22:49:59.836305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.162 [2024-12-10 22:49:59.836321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:52.162 [2024-12-10 22:49:59.836637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.162 [2024-12-10 22:49:59.836662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.162 [2024-12-10 22:49:59.836683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.162 [2024-12-10 22:49:59.836700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:52.162 passed 00:18:52.420 Test: blockdev nvme passthru rw ...passed 00:18:52.420 Test: blockdev nvme passthru vendor specific ...[2024-12-10 22:49:59.920786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.420 [2024-12-10 22:49:59.920814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:52.420 [2024-12-10 22:49:59.920958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.420 [2024-12-10 22:49:59.920987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:52.420 [2024-12-10 22:49:59.921126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.420 [2024-12-10 22:49:59.921148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:52.420 [2024-12-10 22:49:59.921285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.420 [2024-12-10 22:49:59.921307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:52.420 passed 00:18:52.420 Test: blockdev nvme admin passthru ...passed 00:18:52.420 Test: blockdev copy ...passed 00:18:52.420 00:18:52.420 Run Summary: Type Total Ran Passed Failed Inactive 00:18:52.420 suites 1 1 n/a 0 0 00:18:52.420 tests 23 23 23 0 0 00:18:52.420 asserts 152 152 152 0 n/a 00:18:52.420 00:18:52.420 Elapsed time = 1.308 seconds 
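The bdevio run above was fed its controller config through `gen_nvmf_target_json`, which expands per-subsystem parameters into the `bdev_nvme_attach_controller` JSON printed earlier in the trace. A simplified sketch of that heredoc pattern, with a reduced parameter set and hardcoded stand-ins for the script's environment variables:

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json: expand per-subsystem parameters into a
# JSON fragment via an unquoted heredoc (variables interpolate inside it).
set -euo pipefail

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_json() {
    local subsystem=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_json 1
```

The real helper additionally collects one fragment per subsystem, joins them with `jq`, and hands the result to bdevio as `--json /dev/fd/62`, so the config never touches disk.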
00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:52.678 rmmod nvme_tcp 00:18:52.678 rmmod nvme_fabrics 00:18:52.678 rmmod nvme_keyring 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 80775 ']' 00:18:52.678 22:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 80775 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 80775 ']' 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 80775 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.678 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80775 00:18:52.937 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:52.937 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:52.937 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80775' 00:18:52.937 killing process with pid 80775 00:18:52.937 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 80775 00:18:52.937 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 80775 00:18:53.197 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:53.197 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:53.197 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:53.197 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:53.197 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:53.197 22:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:53.197 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:53.197 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:53.197 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:53.197 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.197 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.197 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.771 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:55.771 00:18:55.771 real 0m6.891s 00:18:55.771 user 0m12.154s 00:18:55.771 sys 0m2.701s 00:18:55.771 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.771 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.771 ************************************ 00:18:55.771 END TEST nvmf_bdevio_no_huge 00:18:55.771 ************************************ 00:18:55.771 22:50:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:55.771 22:50:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:55.771 22:50:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.771 22:50:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.771 
************************************ 00:18:55.771 START TEST nvmf_tls 00:18:55.771 ************************************ 00:18:55.771 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:55.771 * Looking for test storage... 00:18:55.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.771 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:55.771 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:55.771 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.771 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:55.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.771 --rc genhtml_branch_coverage=1 00:18:55.772 --rc genhtml_function_coverage=1 00:18:55.772 --rc genhtml_legend=1 00:18:55.772 --rc geninfo_all_blocks=1 00:18:55.772 --rc geninfo_unexecuted_blocks=1 00:18:55.772 00:18:55.772 ' 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:55.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.772 --rc genhtml_branch_coverage=1 00:18:55.772 --rc genhtml_function_coverage=1 00:18:55.772 --rc genhtml_legend=1 00:18:55.772 --rc geninfo_all_blocks=1 00:18:55.772 --rc geninfo_unexecuted_blocks=1 00:18:55.772 00:18:55.772 ' 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:55.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.772 --rc genhtml_branch_coverage=1 00:18:55.772 --rc genhtml_function_coverage=1 00:18:55.772 --rc genhtml_legend=1 00:18:55.772 --rc geninfo_all_blocks=1 00:18:55.772 --rc geninfo_unexecuted_blocks=1 00:18:55.772 00:18:55.772 ' 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:55.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.772 --rc genhtml_branch_coverage=1 00:18:55.772 --rc genhtml_function_coverage=1 00:18:55.772 --rc genhtml_legend=1 00:18:55.772 --rc geninfo_all_blocks=1 00:18:55.772 --rc geninfo_unexecuted_blocks=1 00:18:55.772 00:18:55.772 ' 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.772 
22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:55.772 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.699 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.700 22:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:57.700 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:57.700 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.700 22:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:57.700 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:57.700 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:57.700 22:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:57.700 
22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:57.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:18:57.700 00:18:57.700 --- 10.0.0.2 ping statistics --- 00:18:57.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.700 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:57.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:18:57.700 00:18:57.700 --- 10.0.0.1 ping statistics --- 00:18:57.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.700 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83027 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
--wait-for-rpc 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83027 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83027 ']' 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.700 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.960 [2024-12-10 22:50:05.480798] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:18:57.960 [2024-12-10 22:50:05.480885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.960 [2024-12-10 22:50:05.563578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.960 [2024-12-10 22:50:05.621725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.960 [2024-12-10 22:50:05.621794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
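The `nvmfappstart`/`waitforlisten` sequence above launches `nvmf_tgt` in the background and blocks until its UNIX-domain RPC socket at /var/tmp/spdk.sock accepts connections. A runnable sketch of that pattern, where a python3 one-liner stands in for both the real target and rpc.py's connect probe (socket path and retry budget here are illustrative, not SPDK's actual values):

```shell
#!/usr/bin/env bash
# Sketch of the nvmfappstart/waitforlisten pattern: start the app in the
# background, then poll its UNIX-domain RPC socket until it accepts
# connections. python3 stands in for nvmf_tgt and for the connect probe.
sock=$(mktemp -u)

waitforlisten() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        python3 -c "import socket; socket.socket(socket.AF_UNIX).connect('$path')" 2>/dev/null && return 0
        sleep 0.1
    done
    return 1
}

# stand-in "target": binds its socket only after a short startup delay
python3 -c "
import socket, time
time.sleep(0.3)
s = socket.socket(socket.AF_UNIX)
s.bind('$sock')
s.listen(1)
time.sleep(5)
" &
nvmfpid=$!

up=no
if waitforlisten "$sock" 50; then up=yes; fi
echo "target up: $up (pid $nvmfpid)"
kill "$nvmfpid" 2>/dev/null
```

The poll succeeds as soon as `listen()` has been called, even before the target services the connection, which is why the helper is a reliable readiness gate.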
00:18:57.960 [2024-12-10 22:50:05.621807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.960 [2024-12-10 22:50:05.621826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.960 [2024-12-10 22:50:05.621836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.960 [2024-12-10 22:50:05.622501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.219 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.219 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:58.219 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.219 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.219 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.219 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.219 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:58.219 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:58.477 true 00:18:58.477 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.477 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:58.734 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:58.734 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:58.735 
22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:58.993 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.993 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:59.252 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:59.252 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:59.252 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:59.510 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.510 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:59.768 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:59.768 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:59.768 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.768 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:00.025 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:00.025 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:00.025 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
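Each `sock_impl_set_options` call above is immediately followed by a `sock_impl_get_options | jq` read-back, and a bash pattern match fails the test on any mismatch. A self-contained sketch of that round-trip check; the `rpc` function is a stub standing in for `scripts/rpc.py sock_impl_get_options -i ssl` (the real reply carries more fields), and python3 replaces `jq` so the sketch has no extra dependencies:

```shell
#!/usr/bin/env bash
# Set/read-back/verify loop, same shape as the checks in tls.sh.
# Stub standing in for: scripts/rpc.py sock_impl_get_options -i ssl
rpc() { echo '{"tls_version": 13, "enable_ktls": false}'; }

version=$(rpc | python3 -c 'import json, sys; print(json.load(sys.stdin)["tls_version"])')
ktls=$(rpc | python3 -c 'import json, sys; print(str(json.load(sys.stdin)["enable_ktls"]).lower())')

# bail out unless the read-back matches what was (notionally) set
[[ $version != 13 ]] && { echo "tls_version mismatch"; exit 1; }
[[ $ktls != false ]] && { echo "ktls mismatch"; exit 1; }
echo "verified: tls_version=$version enable_ktls=$ktls"
```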
00:19:00.282 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.282 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:00.541 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:00.541 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:00.541 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:00.799 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.799 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:01.059 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:01.059 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:01.059 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:01.059 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:01.059 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.059 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:01.059 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:01.059 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:01.059 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:01.319 22:50:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.9gny5Vn9Jd 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.sF39X5ILYt 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9gny5Vn9Jd 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.sF39X5ILYt 00:19:01.319 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:01.578 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:01.836 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.9gny5Vn9Jd 00:19:01.836 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9gny5Vn9Jd 00:19:01.836 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:02.407 [2024-12-10 22:50:09.849530] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.407 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:02.666 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:02.925 [2024-12-10 22:50:10.459184] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:02.925 [2024-12-10 22:50:10.459437] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.925 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.183 malloc0 00:19:03.183 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.441 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9gny5Vn9Jd 00:19:04.010 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.010 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9gny5Vn9Jd 00:19:16.226 Initializing NVMe Controllers 00:19:16.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:16.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:16.226 Initialization complete. Launching workers. 
00:19:16.226 ======================================================== 00:19:16.226 Latency(us) 00:19:16.226 Device Information : IOPS MiB/s Average min max 00:19:16.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8653.74 33.80 7397.65 1171.63 8848.41 00:19:16.226 ======================================================== 00:19:16.226 Total : 8653.74 33.80 7397.65 1171.63 8848.41 00:19:16.226 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9gny5Vn9Jd 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9gny5Vn9Jd 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84932 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84932 /var/tmp/bdevperf.sock 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84932 ']' 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
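The two `NVMeTLSkey-1:01:...` strings generated earlier come from `format_interchange_psk`, which (as the `nvmf/common.sh@733 python -` step shows) frames the key as `<prefix>:<hash id>:<base64(key bytes + CRC-32)>:`. A sketch of that framing; two details are assumptions inferred from the logged payloads rather than confirmed by the log: the ASCII hex string itself serves as the key material, and the CRC-32 is appended little-endian:

```shell
#!/usr/bin/env bash
# Sketch of format_key: base64-encode the key bytes with a trailing
# little-endian CRC-32, then frame with prefix and 2-digit hash id.
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib

prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
crc = zlib.crc32(key.encode()).to_bytes(4, "little")
b64 = base64.b64encode(key.encode() + crc).decode()
print("{}:{:02x}:{}:".format(prefix, digest, b64))
EOF
}

key=$(format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1)
echo "$key"
```

The trailing CRC lets a consumer detect a corrupted or truncated key before attempting a TLS handshake with it.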
00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.226 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.226 [2024-12-10 22:50:21.872726] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:19:16.226 [2024-12-10 22:50:21.872816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84932 ] 00:19:16.226 [2024-12-10 22:50:21.941184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.226 [2024-12-10 22:50:21.999497] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.226 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.226 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:16.226 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9gny5Vn9Jd 00:19:16.226 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:16.226 [2024-12-10 22:50:22.642519] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.226 TLSTESTn1 00:19:16.227 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:16.227 Running I/O for 10 seconds... 00:19:17.166 3407.00 IOPS, 13.31 MiB/s [2024-12-10T21:50:26.279Z] 3485.50 IOPS, 13.62 MiB/s [2024-12-10T21:50:26.847Z] 3519.00 IOPS, 13.75 MiB/s [2024-12-10T21:50:28.221Z] 3539.25 IOPS, 13.83 MiB/s [2024-12-10T21:50:29.157Z] 3551.80 IOPS, 13.87 MiB/s [2024-12-10T21:50:30.095Z] 3533.50 IOPS, 13.80 MiB/s [2024-12-10T21:50:31.028Z] 3544.43 IOPS, 13.85 MiB/s [2024-12-10T21:50:31.968Z] 3534.25 IOPS, 13.81 MiB/s [2024-12-10T21:50:32.904Z] 3539.22 IOPS, 13.83 MiB/s [2024-12-10T21:50:32.904Z] 3539.80 IOPS, 13.83 MiB/s 00:19:25.172 Latency(us) 00:19:25.172 [2024-12-10T21:50:32.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.172 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:25.172 Verification LBA range: start 0x0 length 0x2000 00:19:25.172 TLSTESTn1 : 10.02 3544.75 13.85 0.00 0.00 36048.20 8204.14 33399.09 00:19:25.172 [2024-12-10T21:50:32.904Z] =================================================================================================================== 00:19:25.172 [2024-12-10T21:50:32.904Z] Total : 3544.75 13.85 0.00 0.00 36048.20 8204.14 33399.09 00:19:25.172 { 00:19:25.172 "results": [ 00:19:25.172 { 00:19:25.172 "job": "TLSTESTn1", 00:19:25.172 "core_mask": "0x4", 00:19:25.172 "workload": "verify", 00:19:25.172 "status": "finished", 00:19:25.172 "verify_range": { 00:19:25.172 "start": 0, 00:19:25.172 "length": 8192 00:19:25.172 }, 00:19:25.172 "queue_depth": 128, 00:19:25.172 "io_size": 4096, 00:19:25.172 "runtime": 10.021584, 00:19:25.172 "iops": 
3544.7490137287677, 00:19:25.172 "mibps": 13.846675834877999, 00:19:25.172 "io_failed": 0, 00:19:25.172 "io_timeout": 0, 00:19:25.172 "avg_latency_us": 36048.20065007694, 00:19:25.172 "min_latency_us": 8204.136296296296, 00:19:25.172 "max_latency_us": 33399.08740740741 00:19:25.172 } 00:19:25.172 ], 00:19:25.172 "core_count": 1 00:19:25.172 } 00:19:25.172 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:25.172 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84932 00:19:25.172 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84932 ']' 00:19:25.172 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84932 00:19:25.172 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.172 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.172 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84932 00:19:25.432 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:25.432 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:25.432 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84932' 00:19:25.432 killing process with pid 84932 00:19:25.432 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84932 00:19:25.432 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.432 00:19:25.432 Latency(us) 00:19:25.432 [2024-12-10T21:50:33.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.432 [2024-12-10T21:50:33.164Z] 
=================================================================================================================== 00:19:25.432 [2024-12-10T21:50:33.164Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.432 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84932 00:19:25.432 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sF39X5ILYt 00:19:25.432 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:25.432 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sF39X5ILYt 00:19:25.432 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:25.432 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.432 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:25.691 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.691 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sF39X5ILYt 00:19:25.691 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.691 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sF39X5ILYt 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86257 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86257 /var/tmp/bdevperf.sock 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86257 ']' 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.692 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.692 [2024-12-10 22:50:33.204594] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
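The `NOT run_bdevperf ...` invocation above wraps a command that is expected to fail: `valid_exec_arg` confirms the argument is callable, and `NOT` inverts its exit status so the expected failure counts as a pass. A minimal sketch of that inversion (assumption: the real helper in autotest_common.sh also carries the `es` bookkeeping visible in the log):

```shell
#!/usr/bin/env bash
# Invert a command's exit status: succeed only if the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # wrapped command unexpectedly succeeded
    fi
    return 0        # wrapped command failed, as expected
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success caught"
```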
00:19:25.692 [2024-12-10 22:50:33.204685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86257 ] 00:19:25.692 [2024-12-10 22:50:33.270270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.692 [2024-12-10 22:50:33.328647] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.950 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.950 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.950 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sF39X5ILYt 00:19:26.208 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:26.467 [2024-12-10 22:50:33.989641] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.467 [2024-12-10 22:50:33.995240] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:26.467 [2024-12-10 22:50:33.995736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8cef70 (107): Transport endpoint is not connected 00:19:26.467 [2024-12-10 22:50:33.996725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8cef70 (9): Bad file descriptor 00:19:26.467 [2024-12-10 
22:50:33.997724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:26.467 [2024-12-10 22:50:33.997746] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:26.467 [2024-12-10 22:50:33.997760] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:26.467 [2024-12-10 22:50:33.997776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:26.467 request: 00:19:26.467 { 00:19:26.467 "name": "TLSTEST", 00:19:26.467 "trtype": "tcp", 00:19:26.467 "traddr": "10.0.0.2", 00:19:26.467 "adrfam": "ipv4", 00:19:26.467 "trsvcid": "4420", 00:19:26.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.467 "prchk_reftag": false, 00:19:26.467 "prchk_guard": false, 00:19:26.467 "hdgst": false, 00:19:26.467 "ddgst": false, 00:19:26.467 "psk": "key0", 00:19:26.467 "allow_unrecognized_csi": false, 00:19:26.467 "method": "bdev_nvme_attach_controller", 00:19:26.467 "req_id": 1 00:19:26.467 } 00:19:26.467 Got JSON-RPC error response 00:19:26.467 response: 00:19:26.467 { 00:19:26.467 "code": -5, 00:19:26.467 "message": "Input/output error" 00:19:26.467 } 00:19:26.467 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 86257 00:19:26.467 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86257 ']' 00:19:26.467 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86257 00:19:26.467 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.467 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.467 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86257 00:19:26.467 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:26.467 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:26.467 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86257' 00:19:26.467 killing process with pid 86257 00:19:26.467 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86257 00:19:26.467 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.467 00:19:26.467 Latency(us) 00:19:26.467 [2024-12-10T21:50:34.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.467 [2024-12-10T21:50:34.199Z] =================================================================================================================== 00:19:26.467 [2024-12-10T21:50:34.200Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.468 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86257 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9gny5Vn9Jd 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:26.727 
22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9gny5Vn9Jd 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9gny5Vn9Jd 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9gny5Vn9Jd 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86399 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86399 
/var/tmp/bdevperf.sock 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86399 ']' 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.727 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.727 [2024-12-10 22:50:34.327048] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:19:26.727 [2024-12-10 22:50:34.327137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86399 ] 00:19:26.727 [2024-12-10 22:50:34.397797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.727 [2024-12-10 22:50:34.453265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.985 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.985 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.985 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9gny5Vn9Jd 00:19:27.243 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:27.502 [2024-12-10 22:50:35.099171] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.502 [2024-12-10 22:50:35.107484] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:27.502 [2024-12-10 22:50:35.107514] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:27.502 [2024-12-10 22:50:35.107574] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:27.502 [2024-12-10 22:50:35.108245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f98f70 (107): Transport endpoint is not connected 00:19:27.502 [2024-12-10 22:50:35.109237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f98f70 (9): Bad file descriptor 00:19:27.502 [2024-12-10 22:50:35.110237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:27.502 [2024-12-10 22:50:35.110255] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:27.502 [2024-12-10 22:50:35.110282] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:27.502 [2024-12-10 22:50:35.110297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:27.502 request: 00:19:27.502 { 00:19:27.502 "name": "TLSTEST", 00:19:27.502 "trtype": "tcp", 00:19:27.502 "traddr": "10.0.0.2", 00:19:27.502 "adrfam": "ipv4", 00:19:27.502 "trsvcid": "4420", 00:19:27.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.502 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:27.502 "prchk_reftag": false, 00:19:27.502 "prchk_guard": false, 00:19:27.502 "hdgst": false, 00:19:27.502 "ddgst": false, 00:19:27.502 "psk": "key0", 00:19:27.502 "allow_unrecognized_csi": false, 00:19:27.502 "method": "bdev_nvme_attach_controller", 00:19:27.502 "req_id": 1 00:19:27.502 } 00:19:27.502 Got JSON-RPC error response 00:19:27.502 response: 00:19:27.502 { 00:19:27.502 "code": -5, 00:19:27.502 "message": "Input/output error" 00:19:27.502 } 00:19:27.502 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 86399 00:19:27.502 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86399 ']' 00:19:27.502 22:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86399 00:19:27.502 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:27.502 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.502 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86399 00:19:27.502 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:27.502 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:27.502 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86399' 00:19:27.502 killing process with pid 86399 00:19:27.502 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86399 00:19:27.502 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.502 00:19:27.502 Latency(us) 00:19:27.502 [2024-12-10T21:50:35.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.502 [2024-12-10T21:50:35.234Z] =================================================================================================================== 00:19:27.502 [2024-12-10T21:50:35.234Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.502 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86399 00:19:27.760 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:27.760 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:27.760 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.760 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.760 22:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.760 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9gny5Vn9Jd 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9gny5Vn9Jd 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9gny5Vn9Jd 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9gny5Vn9Jd 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86542 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86542 /var/tmp/bdevperf.sock 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86542 ']' 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.761 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.761 [2024-12-10 22:50:35.401834] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:19:27.761 [2024-12-10 22:50:35.401923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86542 ] 00:19:27.761 [2024-12-10 22:50:35.468422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.019 [2024-12-10 22:50:35.530220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.019 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.019 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.019 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9gny5Vn9Jd 00:19:28.277 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.535 [2024-12-10 22:50:36.165737] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.535 [2024-12-10 22:50:36.171399] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:28.535 [2024-12-10 22:50:36.171433] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:28.535 [2024-12-10 22:50:36.171472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:28.535 [2024-12-10 22:50:36.172021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abaf70 (107): Transport endpoint is not connected 00:19:28.535 [2024-12-10 22:50:36.173009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abaf70 (9): Bad file descriptor 00:19:28.535 [2024-12-10 22:50:36.174009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:28.535 [2024-12-10 22:50:36.174029] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:28.535 [2024-12-10 22:50:36.174058] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:28.535 [2024-12-10 22:50:36.174072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:28.535 request: 00:19:28.535 { 00:19:28.535 "name": "TLSTEST", 00:19:28.535 "trtype": "tcp", 00:19:28.535 "traddr": "10.0.0.2", 00:19:28.535 "adrfam": "ipv4", 00:19:28.535 "trsvcid": "4420", 00:19:28.535 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:28.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.535 "prchk_reftag": false, 00:19:28.535 "prchk_guard": false, 00:19:28.535 "hdgst": false, 00:19:28.535 "ddgst": false, 00:19:28.535 "psk": "key0", 00:19:28.535 "allow_unrecognized_csi": false, 00:19:28.535 "method": "bdev_nvme_attach_controller", 00:19:28.535 "req_id": 1 00:19:28.535 } 00:19:28.535 Got JSON-RPC error response 00:19:28.535 response: 00:19:28.535 { 00:19:28.535 "code": -5, 00:19:28.535 "message": "Input/output error" 00:19:28.535 } 00:19:28.535 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 86542 00:19:28.536 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86542 ']' 00:19:28.536 22:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86542 00:19:28.536 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.536 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.536 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86542 00:19:28.536 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:28.536 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:28.536 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86542' 00:19:28.536 killing process with pid 86542 00:19:28.536 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86542 00:19:28.536 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.536 00:19:28.536 Latency(us) 00:19:28.536 [2024-12-10T21:50:36.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.536 [2024-12-10T21:50:36.268Z] =================================================================================================================== 00:19:28.536 [2024-12-10T21:50:36.268Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.536 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86542 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.794 22:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86679 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86679 /var/tmp/bdevperf.sock 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86679 ']' 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.794 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.794 [2024-12-10 22:50:36.506301] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:19:28.794 [2024-12-10 22:50:36.506383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86679 ] 00:19:29.051 [2024-12-10 22:50:36.574938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.051 [2024-12-10 22:50:36.632865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.051 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.051 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:29.051 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:29.309 [2024-12-10 22:50:36.999786] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:29.309 [2024-12-10 22:50:36.999845] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:29.309 request: 00:19:29.309 { 00:19:29.309 "name": "key0", 00:19:29.309 "path": "", 00:19:29.309 "method": "keyring_file_add_key", 00:19:29.309 "req_id": 1 00:19:29.309 } 00:19:29.309 Got JSON-RPC error response 00:19:29.309 response: 00:19:29.309 { 00:19:29.309 "code": -1, 00:19:29.309 "message": "Operation not permitted" 00:19:29.309 } 00:19:29.309 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:29.568 [2024-12-10 22:50:37.268628] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:29.568 [2024-12-10 22:50:37.268694] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:29.568 request: 00:19:29.568 { 00:19:29.568 "name": "TLSTEST", 00:19:29.568 "trtype": "tcp", 00:19:29.568 "traddr": "10.0.0.2", 00:19:29.568 "adrfam": "ipv4", 00:19:29.568 "trsvcid": "4420", 00:19:29.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.568 "prchk_reftag": false, 00:19:29.568 "prchk_guard": false, 00:19:29.568 "hdgst": false, 00:19:29.568 "ddgst": false, 00:19:29.568 "psk": "key0", 00:19:29.568 "allow_unrecognized_csi": false, 00:19:29.568 "method": "bdev_nvme_attach_controller", 00:19:29.568 "req_id": 1 00:19:29.568 } 00:19:29.568 Got JSON-RPC error response 00:19:29.568 response: 00:19:29.568 { 00:19:29.568 "code": -126, 00:19:29.568 "message": "Required key not available" 00:19:29.568 } 00:19:29.568 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 86679 00:19:29.568 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86679 ']' 00:19:29.568 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86679 00:19:29.568 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.568 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.568 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86679 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86679' 00:19:29.851 killing process with pid 86679 00:19:29.851 
22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86679 00:19:29.851 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.851 00:19:29.851 Latency(us) 00:19:29.851 [2024-12-10T21:50:37.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.851 [2024-12-10T21:50:37.583Z] =================================================================================================================== 00:19:29.851 [2024-12-10T21:50:37.583Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86679 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 83027 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83027 ']' 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83027 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.851 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83027 00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83027' 00:19:30.127 killing process with pid 83027 00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83027 00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83027 00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:30.127 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.lE7rsacguk 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:30.390 22:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.lE7rsacguk 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86882 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86882 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86882 ']' 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.390 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.390 [2024-12-10 22:50:37.937217] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:19:30.390 [2024-12-10 22:50:37.937304] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.390 [2024-12-10 22:50:38.008572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.390 [2024-12-10 22:50:38.061199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.390 [2024-12-10 22:50:38.061260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.390 [2024-12-10 22:50:38.061289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.390 [2024-12-10 22:50:38.061302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.390 [2024-12-10 22:50:38.061312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:30.390 [2024-12-10 22:50:38.061913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.648 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.648 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:30.648 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:30.648 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.648 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.648 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.648 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.lE7rsacguk 00:19:30.648 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lE7rsacguk 00:19:30.648 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:30.906 [2024-12-10 22:50:38.448704] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.906 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:31.163 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:31.421 [2024-12-10 22:50:38.974045] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.421 [2024-12-10 22:50:38.974297] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:31.421 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:31.679 malloc0 00:19:31.679 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:31.936 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lE7rsacguk 00:19:32.194 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lE7rsacguk 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lE7rsacguk 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=87122 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.452 22:50:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 87122 /var/tmp/bdevperf.sock 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 87122 ']' 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.452 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.452 [2024-12-10 22:50:40.105343] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:19:32.452 [2024-12-10 22:50:40.105419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87122 ] 00:19:32.452 [2024-12-10 22:50:40.180565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.710 [2024-12-10 22:50:40.243391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.710 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.710 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.710 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lE7rsacguk 00:19:32.968 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.227 [2024-12-10 22:50:40.878720] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.227 TLSTESTn1 00:19:33.486 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:33.486 Running I/O for 10 seconds... 
00:19:35.358 3560.00 IOPS, 13.91 MiB/s [2024-12-10T21:50:44.469Z] 3556.00 IOPS, 13.89 MiB/s [2024-12-10T21:50:45.404Z] 3573.67 IOPS, 13.96 MiB/s [2024-12-10T21:50:46.344Z] 3567.00 IOPS, 13.93 MiB/s [2024-12-10T21:50:47.282Z] 3576.60 IOPS, 13.97 MiB/s [2024-12-10T21:50:48.222Z] 3588.50 IOPS, 14.02 MiB/s [2024-12-10T21:50:49.161Z] 3589.14 IOPS, 14.02 MiB/s [2024-12-10T21:50:50.101Z] 3582.38 IOPS, 13.99 MiB/s [2024-12-10T21:50:51.480Z] 3585.89 IOPS, 14.01 MiB/s [2024-12-10T21:50:51.480Z] 3584.50 IOPS, 14.00 MiB/s 00:19:43.748 Latency(us) 00:19:43.748 [2024-12-10T21:50:51.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.748 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:43.748 Verification LBA range: start 0x0 length 0x2000 00:19:43.748 TLSTESTn1 : 10.02 3589.39 14.02 0.00 0.00 35597.85 6602.15 42719.76 00:19:43.748 [2024-12-10T21:50:51.480Z] =================================================================================================================== 00:19:43.748 [2024-12-10T21:50:51.480Z] Total : 3589.39 14.02 0.00 0.00 35597.85 6602.15 42719.76 00:19:43.748 { 00:19:43.748 "results": [ 00:19:43.748 { 00:19:43.748 "job": "TLSTESTn1", 00:19:43.748 "core_mask": "0x4", 00:19:43.748 "workload": "verify", 00:19:43.748 "status": "finished", 00:19:43.748 "verify_range": { 00:19:43.748 "start": 0, 00:19:43.748 "length": 8192 00:19:43.748 }, 00:19:43.748 "queue_depth": 128, 00:19:43.748 "io_size": 4096, 00:19:43.748 "runtime": 10.021761, 00:19:43.748 "iops": 3589.3891303135247, 00:19:43.748 "mibps": 14.021051290287206, 00:19:43.748 "io_failed": 0, 00:19:43.748 "io_timeout": 0, 00:19:43.748 "avg_latency_us": 35597.84745435751, 00:19:43.748 "min_latency_us": 6602.145185185185, 00:19:43.748 "max_latency_us": 42719.76296296297 00:19:43.748 } 00:19:43.748 ], 00:19:43.748 "core_count": 1 00:19:43.748 } 00:19:43.748 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:43.748 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 87122 00:19:43.748 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 87122 ']' 00:19:43.748 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 87122 00:19:43.748 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.748 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.748 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87122 00:19:43.748 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:43.748 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:43.748 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87122' 00:19:43.748 killing process with pid 87122 00:19:43.748 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 87122 00:19:43.748 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.748 00:19:43.748 Latency(us) 00:19:43.748 [2024-12-10T21:50:51.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.748 [2024-12-10T21:50:51.480Z] =================================================================================================================== 00:19:43.748 [2024-12-10T21:50:51.481Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 87122 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.lE7rsacguk 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lE7rsacguk 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lE7rsacguk 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lE7rsacguk 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lE7rsacguk 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=88449 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 88449 /var/tmp/bdevperf.sock 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 88449 ']' 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.749 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.749 [2024-12-10 22:50:51.450617] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:19:43.749 [2024-12-10 22:50:51.450699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88449 ] 00:19:44.007 [2024-12-10 22:50:51.519056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.007 [2024-12-10 22:50:51.578599] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.007 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.007 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:44.007 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lE7rsacguk 00:19:44.265 [2024-12-10 22:50:51.940935] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lE7rsacguk': 0100666 00:19:44.265 [2024-12-10 22:50:51.940979] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:44.265 request: 00:19:44.265 { 00:19:44.265 "name": "key0", 00:19:44.265 "path": "/tmp/tmp.lE7rsacguk", 00:19:44.265 "method": "keyring_file_add_key", 00:19:44.265 "req_id": 1 00:19:44.265 } 00:19:44.265 Got JSON-RPC error response 00:19:44.265 response: 00:19:44.265 { 00:19:44.265 "code": -1, 00:19:44.265 "message": "Operation not permitted" 00:19:44.265 } 00:19:44.265 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:44.523 [2024-12-10 22:50:52.201730] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:44.523 [2024-12-10 22:50:52.201799] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:44.523 request: 00:19:44.523 { 00:19:44.523 "name": "TLSTEST", 00:19:44.523 "trtype": "tcp", 00:19:44.523 "traddr": "10.0.0.2", 00:19:44.523 "adrfam": "ipv4", 00:19:44.523 "trsvcid": "4420", 00:19:44.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.523 "prchk_reftag": false, 00:19:44.523 "prchk_guard": false, 00:19:44.523 "hdgst": false, 00:19:44.523 "ddgst": false, 00:19:44.523 "psk": "key0", 00:19:44.523 "allow_unrecognized_csi": false, 00:19:44.523 "method": "bdev_nvme_attach_controller", 00:19:44.523 "req_id": 1 00:19:44.523 } 00:19:44.523 Got JSON-RPC error response 00:19:44.523 response: 00:19:44.523 { 00:19:44.523 "code": -126, 00:19:44.523 "message": "Required key not available" 00:19:44.523 } 00:19:44.523 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 88449 00:19:44.523 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 88449 ']' 00:19:44.523 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 88449 00:19:44.523 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:44.523 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.523 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88449 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 88449' 00:19:44.782 killing process with pid 88449 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 88449 00:19:44.782 Received shutdown signal, test time was about 10.000000 seconds 00:19:44.782 00:19:44.782 Latency(us) 00:19:44.782 [2024-12-10T21:50:52.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.782 [2024-12-10T21:50:52.514Z] =================================================================================================================== 00:19:44.782 [2024-12-10T21:50:52.514Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 88449 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 86882 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86882 ']' 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86882 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.782 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86882 00:19:45.041 22:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86882' 00:19:45.041 killing process with pid 86882 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86882 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86882 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=88707 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 88707 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 88707 ']' 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:45.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.041 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.299 [2024-12-10 22:50:52.812251] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:19:45.299 [2024-12-10 22:50:52.812346] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.299 [2024-12-10 22:50:52.885511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.299 [2024-12-10 22:50:52.940430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.299 [2024-12-10 22:50:52.940493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.299 [2024-12-10 22:50:52.940520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.299 [2024-12-10 22:50:52.940530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.299 [2024-12-10 22:50:52.940539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
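[Editorial note: the keyring errors recorded in this log come from SPDK's key-file permission check, which rejects PSK files readable by group or others. A minimal sketch of that rule, under the assumption of GNU `stat` on Linux; the path is a temp file and the key content is a made-up placeholder, not a valid TLS PSK:]

```shell
# Sketch of the permission rule SPDK's keyring enforces on PSK files.
# The key content is a placeholder; only the file mode matters here.
KEY=$(mktemp)
echo "NVMeTLSkey-1:01:placeholder:" > "$KEY"
chmod 0666 "$KEY"
PERM_BAD=$(stat -c '%a' "$KEY")   # 666: keyring_file_add_key rejects this
                                  # ("Invalid permissions for key file", rc -1)
chmod 0600 "$KEY"
PERM_OK=$(stat -c '%a' "$KEY")    # 600: owner-only, the key is accepted
rm -f "$KEY"
```

[This matches the log above: after `chmod 0666` both `keyring_file_add_key` and the dependent `bdev_nvme_attach_controller` / `nvmf_subsystem_add_host` calls fail as expected by the negative test, and the suite restores `chmod 0600` before the next positive run.]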
00:19:45.299 [2024-12-10 22:50:52.941149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.lE7rsacguk 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lE7rsacguk 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.lE7rsacguk 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lE7rsacguk 00:19:45.557 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:45.817 [2024-12-10 22:50:53.328154] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.817 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:46.075 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:46.335 [2024-12-10 22:50:53.873674] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.335 [2024-12-10 22:50:53.873947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.335 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:46.595 malloc0 00:19:46.595 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:46.855 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lE7rsacguk 00:19:47.113 [2024-12-10 22:50:54.698402] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lE7rsacguk': 0100666 00:19:47.113 [2024-12-10 22:50:54.698451] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:47.113 request: 00:19:47.113 { 00:19:47.113 "name": "key0", 00:19:47.113 "path": "/tmp/tmp.lE7rsacguk", 00:19:47.113 "method": "keyring_file_add_key", 00:19:47.113 "req_id": 1 
00:19:47.113 } 00:19:47.113 Got JSON-RPC error response 00:19:47.113 response: 00:19:47.113 { 00:19:47.113 "code": -1, 00:19:47.113 "message": "Operation not permitted" 00:19:47.113 } 00:19:47.113 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:47.373 [2024-12-10 22:50:55.019297] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:47.373 [2024-12-10 22:50:55.019375] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:47.373 request: 00:19:47.373 { 00:19:47.373 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.373 "host": "nqn.2016-06.io.spdk:host1", 00:19:47.373 "psk": "key0", 00:19:47.373 "method": "nvmf_subsystem_add_host", 00:19:47.373 "req_id": 1 00:19:47.373 } 00:19:47.373 Got JSON-RPC error response 00:19:47.373 response: 00:19:47.373 { 00:19:47.373 "code": -32603, 00:19:47.373 "message": "Internal error" 00:19:47.373 } 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 88707 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 88707 ']' 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 88707 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:47.373 22:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88707 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88707' 00:19:47.373 killing process with pid 88707 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 88707 00:19:47.373 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 88707 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.lE7rsacguk 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=89007 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 89007 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 89007 ']' 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.631 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.891 [2024-12-10 22:50:55.378843] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:19:47.891 [2024-12-10 22:50:55.378941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.891 [2024-12-10 22:50:55.451816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.891 [2024-12-10 22:50:55.507409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.891 [2024-12-10 22:50:55.507477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.891 [2024-12-10 22:50:55.507491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.891 [2024-12-10 22:50:55.507502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.891 [2024-12-10 22:50:55.507512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:47.891 [2024-12-10 22:50:55.508105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.151 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.151 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.151 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.151 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.151 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.151 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.151 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.lE7rsacguk 00:19:48.151 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lE7rsacguk 00:19:48.151 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:48.409 [2024-12-10 22:50:55.955332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.409 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:48.667 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:48.925 [2024-12-10 22:50:56.528942] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:48.925 [2024-12-10 22:50:56.529195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:48.925 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:49.182 malloc0 00:19:49.182 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:49.440 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lE7rsacguk 00:19:49.698 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.956 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=89298 00:19:49.956 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.956 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:49.956 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 89298 /var/tmp/bdevperf.sock 00:19:49.956 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 89298 ']' 00:19:49.956 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.956 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.956 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:19:49.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.956 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.956 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.956 [2024-12-10 22:50:57.663006] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:19:49.956 [2024-12-10 22:50:57.663104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89298 ] 00:19:50.214 [2024-12-10 22:50:57.732403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.214 [2024-12-10 22:50:57.789566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.214 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.214 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:50.214 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lE7rsacguk 00:19:50.472 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:50.731 [2024-12-10 22:50:58.421229] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.990 TLSTESTn1 00:19:50.990 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:51.248 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:51.248 "subsystems": [ 00:19:51.248 { 00:19:51.248 "subsystem": "keyring", 00:19:51.248 "config": [ 00:19:51.248 { 00:19:51.248 "method": "keyring_file_add_key", 00:19:51.248 "params": { 00:19:51.248 "name": "key0", 00:19:51.248 "path": "/tmp/tmp.lE7rsacguk" 00:19:51.248 } 00:19:51.248 } 00:19:51.248 ] 00:19:51.248 }, 00:19:51.248 { 00:19:51.248 "subsystem": "iobuf", 00:19:51.248 "config": [ 00:19:51.248 { 00:19:51.248 "method": "iobuf_set_options", 00:19:51.248 "params": { 00:19:51.248 "small_pool_count": 8192, 00:19:51.248 "large_pool_count": 1024, 00:19:51.248 "small_bufsize": 8192, 00:19:51.248 "large_bufsize": 135168, 00:19:51.248 "enable_numa": false 00:19:51.248 } 00:19:51.248 } 00:19:51.248 ] 00:19:51.248 }, 00:19:51.248 { 00:19:51.248 "subsystem": "sock", 00:19:51.248 "config": [ 00:19:51.248 { 00:19:51.248 "method": "sock_set_default_impl", 00:19:51.248 "params": { 00:19:51.248 "impl_name": "posix" 00:19:51.248 } 00:19:51.248 }, 00:19:51.248 { 00:19:51.248 "method": "sock_impl_set_options", 00:19:51.248 "params": { 00:19:51.248 "impl_name": "ssl", 00:19:51.248 "recv_buf_size": 4096, 00:19:51.248 "send_buf_size": 4096, 00:19:51.248 "enable_recv_pipe": true, 00:19:51.248 "enable_quickack": false, 00:19:51.248 "enable_placement_id": 0, 00:19:51.248 "enable_zerocopy_send_server": true, 00:19:51.248 "enable_zerocopy_send_client": false, 00:19:51.248 "zerocopy_threshold": 0, 00:19:51.248 "tls_version": 0, 00:19:51.248 "enable_ktls": false 00:19:51.248 } 00:19:51.248 }, 00:19:51.248 { 00:19:51.248 "method": "sock_impl_set_options", 00:19:51.248 "params": { 00:19:51.248 "impl_name": "posix", 00:19:51.248 "recv_buf_size": 2097152, 00:19:51.248 "send_buf_size": 2097152, 00:19:51.248 "enable_recv_pipe": true, 00:19:51.248 "enable_quickack": false, 00:19:51.248 "enable_placement_id": 0, 
00:19:51.248 "enable_zerocopy_send_server": true, 00:19:51.248 "enable_zerocopy_send_client": false, 00:19:51.248 "zerocopy_threshold": 0, 00:19:51.248 "tls_version": 0, 00:19:51.248 "enable_ktls": false 00:19:51.248 } 00:19:51.248 } 00:19:51.248 ] 00:19:51.248 }, 00:19:51.248 { 00:19:51.248 "subsystem": "vmd", 00:19:51.248 "config": [] 00:19:51.248 }, 00:19:51.248 { 00:19:51.248 "subsystem": "accel", 00:19:51.248 "config": [ 00:19:51.248 { 00:19:51.248 "method": "accel_set_options", 00:19:51.248 "params": { 00:19:51.248 "small_cache_size": 128, 00:19:51.248 "large_cache_size": 16, 00:19:51.248 "task_count": 2048, 00:19:51.248 "sequence_count": 2048, 00:19:51.248 "buf_count": 2048 00:19:51.248 } 00:19:51.248 } 00:19:51.248 ] 00:19:51.248 }, 00:19:51.248 { 00:19:51.248 "subsystem": "bdev", 00:19:51.248 "config": [ 00:19:51.248 { 00:19:51.248 "method": "bdev_set_options", 00:19:51.248 "params": { 00:19:51.248 "bdev_io_pool_size": 65535, 00:19:51.248 "bdev_io_cache_size": 256, 00:19:51.248 "bdev_auto_examine": true, 00:19:51.248 "iobuf_small_cache_size": 128, 00:19:51.248 "iobuf_large_cache_size": 16 00:19:51.248 } 00:19:51.248 }, 00:19:51.248 { 00:19:51.248 "method": "bdev_raid_set_options", 00:19:51.248 "params": { 00:19:51.248 "process_window_size_kb": 1024, 00:19:51.248 "process_max_bandwidth_mb_sec": 0 00:19:51.248 } 00:19:51.248 }, 00:19:51.248 { 00:19:51.248 "method": "bdev_iscsi_set_options", 00:19:51.248 "params": { 00:19:51.248 "timeout_sec": 30 00:19:51.248 } 00:19:51.248 }, 00:19:51.248 { 00:19:51.248 "method": "bdev_nvme_set_options", 00:19:51.248 "params": { 00:19:51.248 "action_on_timeout": "none", 00:19:51.248 "timeout_us": 0, 00:19:51.248 "timeout_admin_us": 0, 00:19:51.248 "keep_alive_timeout_ms": 10000, 00:19:51.248 "arbitration_burst": 0, 00:19:51.248 "low_priority_weight": 0, 00:19:51.248 "medium_priority_weight": 0, 00:19:51.249 "high_priority_weight": 0, 00:19:51.249 "nvme_adminq_poll_period_us": 10000, 00:19:51.249 "nvme_ioq_poll_period_us": 0, 
00:19:51.249 "io_queue_requests": 0, 00:19:51.249 "delay_cmd_submit": true, 00:19:51.249 "transport_retry_count": 4, 00:19:51.249 "bdev_retry_count": 3, 00:19:51.249 "transport_ack_timeout": 0, 00:19:51.249 "ctrlr_loss_timeout_sec": 0, 00:19:51.249 "reconnect_delay_sec": 0, 00:19:51.249 "fast_io_fail_timeout_sec": 0, 00:19:51.249 "disable_auto_failback": false, 00:19:51.249 "generate_uuids": false, 00:19:51.249 "transport_tos": 0, 00:19:51.249 "nvme_error_stat": false, 00:19:51.249 "rdma_srq_size": 0, 00:19:51.249 "io_path_stat": false, 00:19:51.249 "allow_accel_sequence": false, 00:19:51.249 "rdma_max_cq_size": 0, 00:19:51.249 "rdma_cm_event_timeout_ms": 0, 00:19:51.249 "dhchap_digests": [ 00:19:51.249 "sha256", 00:19:51.249 "sha384", 00:19:51.249 "sha512" 00:19:51.249 ], 00:19:51.249 "dhchap_dhgroups": [ 00:19:51.249 "null", 00:19:51.249 "ffdhe2048", 00:19:51.249 "ffdhe3072", 00:19:51.249 "ffdhe4096", 00:19:51.249 "ffdhe6144", 00:19:51.249 "ffdhe8192" 00:19:51.249 ], 00:19:51.249 "rdma_umr_per_io": false 00:19:51.249 } 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "method": "bdev_nvme_set_hotplug", 00:19:51.249 "params": { 00:19:51.249 "period_us": 100000, 00:19:51.249 "enable": false 00:19:51.249 } 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "method": "bdev_malloc_create", 00:19:51.249 "params": { 00:19:51.249 "name": "malloc0", 00:19:51.249 "num_blocks": 8192, 00:19:51.249 "block_size": 4096, 00:19:51.249 "physical_block_size": 4096, 00:19:51.249 "uuid": "a1a71c40-a00c-4332-ae87-78db3625b166", 00:19:51.249 "optimal_io_boundary": 0, 00:19:51.249 "md_size": 0, 00:19:51.249 "dif_type": 0, 00:19:51.249 "dif_is_head_of_md": false, 00:19:51.249 "dif_pi_format": 0 00:19:51.249 } 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "method": "bdev_wait_for_examine" 00:19:51.249 } 00:19:51.249 ] 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "subsystem": "nbd", 00:19:51.249 "config": [] 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "subsystem": "scheduler", 00:19:51.249 "config": [ 
00:19:51.249 { 00:19:51.249 "method": "framework_set_scheduler", 00:19:51.249 "params": { 00:19:51.249 "name": "static" 00:19:51.249 } 00:19:51.249 } 00:19:51.249 ] 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "subsystem": "nvmf", 00:19:51.249 "config": [ 00:19:51.249 { 00:19:51.249 "method": "nvmf_set_config", 00:19:51.249 "params": { 00:19:51.249 "discovery_filter": "match_any", 00:19:51.249 "admin_cmd_passthru": { 00:19:51.249 "identify_ctrlr": false 00:19:51.249 }, 00:19:51.249 "dhchap_digests": [ 00:19:51.249 "sha256", 00:19:51.249 "sha384", 00:19:51.249 "sha512" 00:19:51.249 ], 00:19:51.249 "dhchap_dhgroups": [ 00:19:51.249 "null", 00:19:51.249 "ffdhe2048", 00:19:51.249 "ffdhe3072", 00:19:51.249 "ffdhe4096", 00:19:51.249 "ffdhe6144", 00:19:51.249 "ffdhe8192" 00:19:51.249 ] 00:19:51.249 } 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "method": "nvmf_set_max_subsystems", 00:19:51.249 "params": { 00:19:51.249 "max_subsystems": 1024 00:19:51.249 } 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "method": "nvmf_set_crdt", 00:19:51.249 "params": { 00:19:51.249 "crdt1": 0, 00:19:51.249 "crdt2": 0, 00:19:51.249 "crdt3": 0 00:19:51.249 } 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "method": "nvmf_create_transport", 00:19:51.249 "params": { 00:19:51.249 "trtype": "TCP", 00:19:51.249 "max_queue_depth": 128, 00:19:51.249 "max_io_qpairs_per_ctrlr": 127, 00:19:51.249 "in_capsule_data_size": 4096, 00:19:51.249 "max_io_size": 131072, 00:19:51.249 "io_unit_size": 131072, 00:19:51.249 "max_aq_depth": 128, 00:19:51.249 "num_shared_buffers": 511, 00:19:51.249 "buf_cache_size": 4294967295, 00:19:51.249 "dif_insert_or_strip": false, 00:19:51.249 "zcopy": false, 00:19:51.249 "c2h_success": false, 00:19:51.249 "sock_priority": 0, 00:19:51.249 "abort_timeout_sec": 1, 00:19:51.249 "ack_timeout": 0, 00:19:51.249 "data_wr_pool_size": 0 00:19:51.249 } 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "method": "nvmf_create_subsystem", 00:19:51.249 "params": { 00:19:51.249 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:19:51.249 "allow_any_host": false, 00:19:51.249 "serial_number": "SPDK00000000000001", 00:19:51.249 "model_number": "SPDK bdev Controller", 00:19:51.249 "max_namespaces": 10, 00:19:51.249 "min_cntlid": 1, 00:19:51.249 "max_cntlid": 65519, 00:19:51.249 "ana_reporting": false 00:19:51.249 } 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "method": "nvmf_subsystem_add_host", 00:19:51.249 "params": { 00:19:51.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.249 "host": "nqn.2016-06.io.spdk:host1", 00:19:51.249 "psk": "key0" 00:19:51.249 } 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "method": "nvmf_subsystem_add_ns", 00:19:51.249 "params": { 00:19:51.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.249 "namespace": { 00:19:51.249 "nsid": 1, 00:19:51.249 "bdev_name": "malloc0", 00:19:51.249 "nguid": "A1A71C40A00C4332AE8778DB3625B166", 00:19:51.249 "uuid": "a1a71c40-a00c-4332-ae87-78db3625b166", 00:19:51.249 "no_auto_visible": false 00:19:51.249 } 00:19:51.249 } 00:19:51.249 }, 00:19:51.249 { 00:19:51.249 "method": "nvmf_subsystem_add_listener", 00:19:51.249 "params": { 00:19:51.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.249 "listen_address": { 00:19:51.249 "trtype": "TCP", 00:19:51.249 "adrfam": "IPv4", 00:19:51.249 "traddr": "10.0.0.2", 00:19:51.249 "trsvcid": "4420" 00:19:51.249 }, 00:19:51.249 "secure_channel": true 00:19:51.249 } 00:19:51.249 } 00:19:51.249 ] 00:19:51.249 } 00:19:51.249 ] 00:19:51.249 }' 00:19:51.249 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:51.508 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:51.508 "subsystems": [ 00:19:51.508 { 00:19:51.508 "subsystem": "keyring", 00:19:51.508 "config": [ 00:19:51.508 { 00:19:51.508 "method": "keyring_file_add_key", 00:19:51.508 "params": { 00:19:51.508 "name": "key0", 00:19:51.508 "path": 
"/tmp/tmp.lE7rsacguk" 00:19:51.508 } 00:19:51.508 } 00:19:51.508 ] 00:19:51.508 }, 00:19:51.508 { 00:19:51.508 "subsystem": "iobuf", 00:19:51.508 "config": [ 00:19:51.508 { 00:19:51.508 "method": "iobuf_set_options", 00:19:51.508 "params": { 00:19:51.508 "small_pool_count": 8192, 00:19:51.508 "large_pool_count": 1024, 00:19:51.508 "small_bufsize": 8192, 00:19:51.508 "large_bufsize": 135168, 00:19:51.508 "enable_numa": false 00:19:51.508 } 00:19:51.508 } 00:19:51.508 ] 00:19:51.508 }, 00:19:51.508 { 00:19:51.508 "subsystem": "sock", 00:19:51.509 "config": [ 00:19:51.509 { 00:19:51.509 "method": "sock_set_default_impl", 00:19:51.509 "params": { 00:19:51.509 "impl_name": "posix" 00:19:51.509 } 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "method": "sock_impl_set_options", 00:19:51.509 "params": { 00:19:51.509 "impl_name": "ssl", 00:19:51.509 "recv_buf_size": 4096, 00:19:51.509 "send_buf_size": 4096, 00:19:51.509 "enable_recv_pipe": true, 00:19:51.509 "enable_quickack": false, 00:19:51.509 "enable_placement_id": 0, 00:19:51.509 "enable_zerocopy_send_server": true, 00:19:51.509 "enable_zerocopy_send_client": false, 00:19:51.509 "zerocopy_threshold": 0, 00:19:51.509 "tls_version": 0, 00:19:51.509 "enable_ktls": false 00:19:51.509 } 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "method": "sock_impl_set_options", 00:19:51.509 "params": { 00:19:51.509 "impl_name": "posix", 00:19:51.509 "recv_buf_size": 2097152, 00:19:51.509 "send_buf_size": 2097152, 00:19:51.509 "enable_recv_pipe": true, 00:19:51.509 "enable_quickack": false, 00:19:51.509 "enable_placement_id": 0, 00:19:51.509 "enable_zerocopy_send_server": true, 00:19:51.509 "enable_zerocopy_send_client": false, 00:19:51.509 "zerocopy_threshold": 0, 00:19:51.509 "tls_version": 0, 00:19:51.509 "enable_ktls": false 00:19:51.509 } 00:19:51.509 } 00:19:51.509 ] 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "subsystem": "vmd", 00:19:51.509 "config": [] 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "subsystem": "accel", 00:19:51.509 
"config": [ 00:19:51.509 { 00:19:51.509 "method": "accel_set_options", 00:19:51.509 "params": { 00:19:51.509 "small_cache_size": 128, 00:19:51.509 "large_cache_size": 16, 00:19:51.509 "task_count": 2048, 00:19:51.509 "sequence_count": 2048, 00:19:51.509 "buf_count": 2048 00:19:51.509 } 00:19:51.509 } 00:19:51.509 ] 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "subsystem": "bdev", 00:19:51.509 "config": [ 00:19:51.509 { 00:19:51.509 "method": "bdev_set_options", 00:19:51.509 "params": { 00:19:51.509 "bdev_io_pool_size": 65535, 00:19:51.509 "bdev_io_cache_size": 256, 00:19:51.509 "bdev_auto_examine": true, 00:19:51.509 "iobuf_small_cache_size": 128, 00:19:51.509 "iobuf_large_cache_size": 16 00:19:51.509 } 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "method": "bdev_raid_set_options", 00:19:51.509 "params": { 00:19:51.509 "process_window_size_kb": 1024, 00:19:51.509 "process_max_bandwidth_mb_sec": 0 00:19:51.509 } 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "method": "bdev_iscsi_set_options", 00:19:51.509 "params": { 00:19:51.509 "timeout_sec": 30 00:19:51.509 } 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "method": "bdev_nvme_set_options", 00:19:51.509 "params": { 00:19:51.509 "action_on_timeout": "none", 00:19:51.509 "timeout_us": 0, 00:19:51.509 "timeout_admin_us": 0, 00:19:51.509 "keep_alive_timeout_ms": 10000, 00:19:51.509 "arbitration_burst": 0, 00:19:51.509 "low_priority_weight": 0, 00:19:51.509 "medium_priority_weight": 0, 00:19:51.509 "high_priority_weight": 0, 00:19:51.509 "nvme_adminq_poll_period_us": 10000, 00:19:51.509 "nvme_ioq_poll_period_us": 0, 00:19:51.509 "io_queue_requests": 512, 00:19:51.509 "delay_cmd_submit": true, 00:19:51.509 "transport_retry_count": 4, 00:19:51.509 "bdev_retry_count": 3, 00:19:51.509 "transport_ack_timeout": 0, 00:19:51.509 "ctrlr_loss_timeout_sec": 0, 00:19:51.509 "reconnect_delay_sec": 0, 00:19:51.509 "fast_io_fail_timeout_sec": 0, 00:19:51.509 "disable_auto_failback": false, 00:19:51.509 "generate_uuids": false, 00:19:51.509 
"transport_tos": 0, 00:19:51.509 "nvme_error_stat": false, 00:19:51.509 "rdma_srq_size": 0, 00:19:51.509 "io_path_stat": false, 00:19:51.509 "allow_accel_sequence": false, 00:19:51.509 "rdma_max_cq_size": 0, 00:19:51.509 "rdma_cm_event_timeout_ms": 0, 00:19:51.509 "dhchap_digests": [ 00:19:51.509 "sha256", 00:19:51.509 "sha384", 00:19:51.509 "sha512" 00:19:51.509 ], 00:19:51.509 "dhchap_dhgroups": [ 00:19:51.509 "null", 00:19:51.509 "ffdhe2048", 00:19:51.509 "ffdhe3072", 00:19:51.509 "ffdhe4096", 00:19:51.509 "ffdhe6144", 00:19:51.509 "ffdhe8192" 00:19:51.509 ], 00:19:51.509 "rdma_umr_per_io": false 00:19:51.509 } 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "method": "bdev_nvme_attach_controller", 00:19:51.509 "params": { 00:19:51.509 "name": "TLSTEST", 00:19:51.509 "trtype": "TCP", 00:19:51.509 "adrfam": "IPv4", 00:19:51.509 "traddr": "10.0.0.2", 00:19:51.509 "trsvcid": "4420", 00:19:51.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.509 "prchk_reftag": false, 00:19:51.509 "prchk_guard": false, 00:19:51.509 "ctrlr_loss_timeout_sec": 0, 00:19:51.509 "reconnect_delay_sec": 0, 00:19:51.509 "fast_io_fail_timeout_sec": 0, 00:19:51.509 "psk": "key0", 00:19:51.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.509 "hdgst": false, 00:19:51.509 "ddgst": false, 00:19:51.509 "multipath": "multipath" 00:19:51.509 } 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "method": "bdev_nvme_set_hotplug", 00:19:51.509 "params": { 00:19:51.509 "period_us": 100000, 00:19:51.509 "enable": false 00:19:51.509 } 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "method": "bdev_wait_for_examine" 00:19:51.509 } 00:19:51.509 ] 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "subsystem": "nbd", 00:19:51.509 "config": [] 00:19:51.509 } 00:19:51.509 ] 00:19:51.509 }' 00:19:51.509 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 89298 00:19:51.509 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 89298 ']' 00:19:51.509 22:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 89298 00:19:51.509 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:51.509 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.509 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89298 00:19:51.509 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:51.509 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:51.509 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89298' 00:19:51.509 killing process with pid 89298 00:19:51.509 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 89298 00:19:51.509 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.509 00:19:51.509 Latency(us) 00:19:51.509 [2024-12-10T21:50:59.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.509 [2024-12-10T21:50:59.241Z] =================================================================================================================== 00:19:51.509 [2024-12-10T21:50:59.241Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.509 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 89298 00:19:51.768 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 89007 00:19:51.768 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 89007 ']' 00:19:51.768 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 89007 00:19:51.768 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:51.768 22:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.768 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89007 00:19:51.768 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:51.768 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:51.768 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89007' 00:19:51.768 killing process with pid 89007 00:19:51.768 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 89007 00:19:51.768 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 89007 00:19:52.027 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:52.027 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.027 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.027 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:52.027 "subsystems": [ 00:19:52.027 { 00:19:52.027 "subsystem": "keyring", 00:19:52.027 "config": [ 00:19:52.027 { 00:19:52.027 "method": "keyring_file_add_key", 00:19:52.027 "params": { 00:19:52.027 "name": "key0", 00:19:52.027 "path": "/tmp/tmp.lE7rsacguk" 00:19:52.027 } 00:19:52.027 } 00:19:52.027 ] 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "subsystem": "iobuf", 00:19:52.027 "config": [ 00:19:52.027 { 00:19:52.027 "method": "iobuf_set_options", 00:19:52.027 "params": { 00:19:52.027 "small_pool_count": 8192, 00:19:52.027 "large_pool_count": 1024, 00:19:52.027 "small_bufsize": 8192, 00:19:52.027 "large_bufsize": 135168, 00:19:52.027 "enable_numa": false 00:19:52.027 } 00:19:52.027 } 
00:19:52.027 ] 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "subsystem": "sock", 00:19:52.027 "config": [ 00:19:52.027 { 00:19:52.027 "method": "sock_set_default_impl", 00:19:52.027 "params": { 00:19:52.027 "impl_name": "posix" 00:19:52.027 } 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "method": "sock_impl_set_options", 00:19:52.027 "params": { 00:19:52.027 "impl_name": "ssl", 00:19:52.027 "recv_buf_size": 4096, 00:19:52.027 "send_buf_size": 4096, 00:19:52.027 "enable_recv_pipe": true, 00:19:52.027 "enable_quickack": false, 00:19:52.027 "enable_placement_id": 0, 00:19:52.027 "enable_zerocopy_send_server": true, 00:19:52.027 "enable_zerocopy_send_client": false, 00:19:52.027 "zerocopy_threshold": 0, 00:19:52.027 "tls_version": 0, 00:19:52.027 "enable_ktls": false 00:19:52.027 } 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "method": "sock_impl_set_options", 00:19:52.027 "params": { 00:19:52.027 "impl_name": "posix", 00:19:52.027 "recv_buf_size": 2097152, 00:19:52.027 "send_buf_size": 2097152, 00:19:52.027 "enable_recv_pipe": true, 00:19:52.027 "enable_quickack": false, 00:19:52.027 "enable_placement_id": 0, 00:19:52.027 "enable_zerocopy_send_server": true, 00:19:52.027 "enable_zerocopy_send_client": false, 00:19:52.027 "zerocopy_threshold": 0, 00:19:52.027 "tls_version": 0, 00:19:52.027 "enable_ktls": false 00:19:52.027 } 00:19:52.027 } 00:19:52.027 ] 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "subsystem": "vmd", 00:19:52.027 "config": [] 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "subsystem": "accel", 00:19:52.027 "config": [ 00:19:52.027 { 00:19:52.027 "method": "accel_set_options", 00:19:52.027 "params": { 00:19:52.027 "small_cache_size": 128, 00:19:52.027 "large_cache_size": 16, 00:19:52.027 "task_count": 2048, 00:19:52.027 "sequence_count": 2048, 00:19:52.027 "buf_count": 2048 00:19:52.027 } 00:19:52.027 } 00:19:52.027 ] 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "subsystem": "bdev", 00:19:52.027 "config": [ 00:19:52.027 { 00:19:52.027 "method": 
"bdev_set_options", 00:19:52.027 "params": { 00:19:52.027 "bdev_io_pool_size": 65535, 00:19:52.027 "bdev_io_cache_size": 256, 00:19:52.027 "bdev_auto_examine": true, 00:19:52.027 "iobuf_small_cache_size": 128, 00:19:52.027 "iobuf_large_cache_size": 16 00:19:52.027 } 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "method": "bdev_raid_set_options", 00:19:52.027 "params": { 00:19:52.027 "process_window_size_kb": 1024, 00:19:52.027 "process_max_bandwidth_mb_sec": 0 00:19:52.027 } 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "method": "bdev_iscsi_set_options", 00:19:52.027 "params": { 00:19:52.027 "timeout_sec": 30 00:19:52.027 } 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "method": "bdev_nvme_set_options", 00:19:52.027 "params": { 00:19:52.027 "action_on_timeout": "none", 00:19:52.027 "timeout_us": 0, 00:19:52.027 "timeout_admin_us": 0, 00:19:52.027 "keep_alive_timeout_ms": 10000, 00:19:52.027 "arbitration_burst": 0, 00:19:52.027 "low_priority_weight": 0, 00:19:52.027 "medium_priority_weight": 0, 00:19:52.027 "high_priority_weight": 0, 00:19:52.027 "nvme_adminq_poll_period_us": 10000, 00:19:52.027 "nvme_ioq_poll_period_us": 0, 00:19:52.027 "io_queue_requests": 0, 00:19:52.027 "delay_cmd_submit": true, 00:19:52.027 "transport_retry_count": 4, 00:19:52.027 "bdev_retry_count": 3, 00:19:52.027 "transport_ack_timeout": 0, 00:19:52.027 "ctrlr_loss_timeout_sec": 0, 00:19:52.027 "reconnect_delay_sec": 0, 00:19:52.027 "fast_io_fail_timeout_sec": 0, 00:19:52.027 "disable_auto_failback": false, 00:19:52.027 "generate_uuids": false, 00:19:52.027 "transport_tos": 0, 00:19:52.027 "nvme_error_stat": false, 00:19:52.027 "rdma_srq_size": 0, 00:19:52.027 "io_path_stat": false, 00:19:52.027 "allow_accel_sequence": false, 00:19:52.027 "rdma_max_cq_size": 0, 00:19:52.027 "rdma_cm_event_timeout_ms": 0, 00:19:52.027 "dhchap_digests": [ 00:19:52.027 "sha256", 00:19:52.027 "sha384", 00:19:52.027 "sha512" 00:19:52.027 ], 00:19:52.027 "dhchap_dhgroups": [ 00:19:52.027 "null", 00:19:52.027 
"ffdhe2048", 00:19:52.027 "ffdhe3072", 00:19:52.027 "ffdhe4096", 00:19:52.027 "ffdhe6144", 00:19:52.027 "ffdhe8192" 00:19:52.027 ], 00:19:52.027 "rdma_umr_per_io": false 00:19:52.027 } 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "method": "bdev_nvme_set_hotplug", 00:19:52.027 "params": { 00:19:52.027 "period_us": 100000, 00:19:52.027 "enable": false 00:19:52.027 } 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "method": "bdev_malloc_create", 00:19:52.027 "params": { 00:19:52.027 "name": "malloc0", 00:19:52.027 "num_blocks": 8192, 00:19:52.027 "block_size": 4096, 00:19:52.027 "physical_block_size": 4096, 00:19:52.027 "uuid": "a1a71c40-a00c-4332-ae87-78db3625b166", 00:19:52.027 "optimal_io_boundary": 0, 00:19:52.027 "md_size": 0, 00:19:52.027 "dif_type": 0, 00:19:52.027 "dif_is_head_of_md": false, 00:19:52.027 "dif_pi_format": 0 00:19:52.027 } 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "method": "bdev_wait_for_examine" 00:19:52.027 } 00:19:52.027 ] 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "subsystem": "nbd", 00:19:52.027 "config": [] 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "subsystem": "scheduler", 00:19:52.027 "config": [ 00:19:52.027 { 00:19:52.028 "method": "framework_set_scheduler", 00:19:52.028 "params": { 00:19:52.028 "name": "static" 00:19:52.028 } 00:19:52.028 } 00:19:52.028 ] 00:19:52.028 }, 00:19:52.028 { 00:19:52.028 "subsystem": "nvmf", 00:19:52.028 "config": [ 00:19:52.028 { 00:19:52.028 "method": "nvmf_set_config", 00:19:52.028 "params": { 00:19:52.028 "discovery_filter": "match_any", 00:19:52.028 "admin_cmd_passthru": { 00:19:52.028 "identify_ctrlr": false 00:19:52.028 }, 00:19:52.028 "dhchap_digests": [ 00:19:52.028 "sha256", 00:19:52.028 "sha384", 00:19:52.028 "sha512" 00:19:52.028 ], 00:19:52.028 "dhchap_dhgroups": [ 00:19:52.028 "null", 00:19:52.028 "ffdhe2048", 00:19:52.028 "ffdhe3072", 00:19:52.028 "ffdhe4096", 00:19:52.028 "ffdhe6144", 00:19:52.028 "ffdhe8192" 00:19:52.028 ] 00:19:52.028 } 00:19:52.028 }, 00:19:52.028 { 00:19:52.028 
"method": "nvmf_set_max_subsystems", 00:19:52.028 "params": { 00:19:52.028 "max_subsystems": 1024 00:19:52.028 } 00:19:52.028 }, 00:19:52.028 { 00:19:52.028 "method": "nvmf_set_crdt", 00:19:52.028 "params": { 00:19:52.028 "crdt1": 0, 00:19:52.028 "crdt2": 0, 00:19:52.028 "crdt3": 0 00:19:52.028 } 00:19:52.028 }, 00:19:52.028 { 00:19:52.028 "method": "nvmf_create_transport", 00:19:52.028 "params": { 00:19:52.028 "trtype": "TCP", 00:19:52.028 "max_queue_depth": 128, 00:19:52.028 "max_io_qpairs_per_ctrlr": 127, 00:19:52.028 "in_capsule_data_size": 4096, 00:19:52.028 "max_io_size": 131072, 00:19:52.028 "io_unit_size": 131072, 00:19:52.028 "max_aq_depth": 128, 00:19:52.028 "num_shared_buffers": 511, 00:19:52.028 "buf_cache_size": 4294967295, 00:19:52.028 "dif_insert_or_strip": false, 00:19:52.028 "zcopy": false, 00:19:52.028 "c2h_success": false, 00:19:52.028 "sock_priority": 0, 00:19:52.028 "abort_timeout_sec": 1, 00:19:52.028 "ack_timeout": 0, 00:19:52.028 "data_wr_pool_size": 0 00:19:52.028 } 00:19:52.028 }, 00:19:52.028 { 00:19:52.028 "method": "nvmf_create_subsystem", 00:19:52.028 "params": { 00:19:52.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.028 "allow_any_host": false, 00:19:52.028 "serial_number": "SPDK00000000000001", 00:19:52.028 "model_number": "SPDK bdev Controller", 00:19:52.028 "max_namespaces": 10, 00:19:52.028 "min_cntlid": 1, 00:19:52.028 "max_cntlid": 65519, 00:19:52.028 "ana_reporting": false 00:19:52.028 } 00:19:52.028 }, 00:19:52.028 { 00:19:52.028 "method": "nvmf_subsystem_add_host", 00:19:52.028 "params": { 00:19:52.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.028 "host": "nqn.2016-06.io.spdk:host1", 00:19:52.028 "psk": "key0" 00:19:52.028 } 00:19:52.028 }, 00:19:52.028 { 00:19:52.028 "method": "nvmf_subsystem_add_ns", 00:19:52.028 "params": { 00:19:52.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.028 "namespace": { 00:19:52.028 "nsid": 1, 00:19:52.028 "bdev_name": "malloc0", 00:19:52.028 "nguid": 
"A1A71C40A00C4332AE8778DB3625B166", 00:19:52.028 "uuid": "a1a71c40-a00c-4332-ae87-78db3625b166", 00:19:52.028 "no_auto_visible": false 00:19:52.028 } 00:19:52.028 } 00:19:52.028 }, 00:19:52.028 { 00:19:52.028 "method": "nvmf_subsystem_add_listener", 00:19:52.028 "params": { 00:19:52.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.028 "listen_address": { 00:19:52.028 "trtype": "TCP", 00:19:52.028 "adrfam": "IPv4", 00:19:52.028 "traddr": "10.0.0.2", 00:19:52.028 "trsvcid": "4420" 00:19:52.028 }, 00:19:52.028 "secure_channel": true 00:19:52.028 } 00:19:52.028 } 00:19:52.028 ] 00:19:52.028 } 00:19:52.028 ] 00:19:52.028 }' 00:19:52.028 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.028 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=89579 00:19:52.028 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:52.028 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 89579 00:19:52.028 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 89579 ']' 00:19:52.028 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.028 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.028 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:52.028 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.028 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.286 [2024-12-10 22:50:59.773784] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:19:52.286 [2024-12-10 22:50:59.773893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.286 [2024-12-10 22:50:59.845191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.286 [2024-12-10 22:50:59.901115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.286 [2024-12-10 22:50:59.901173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.286 [2024-12-10 22:50:59.901193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.286 [2024-12-10 22:50:59.901204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.287 [2024-12-10 22:50:59.901212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:52.287 [2024-12-10 22:50:59.901889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.546 [2024-12-10 22:51:00.144989] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.546 [2024-12-10 22:51:00.177021] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.546 [2024-12-10 22:51:00.177265] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=89781 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 89781 /var/tmp/bdevperf.sock 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 89781 ']' 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:53.113 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:53.113 "subsystems": [ 00:19:53.113 { 00:19:53.113 "subsystem": "keyring", 00:19:53.113 "config": [ 00:19:53.113 { 00:19:53.113 "method": "keyring_file_add_key", 00:19:53.113 "params": { 00:19:53.113 "name": "key0", 00:19:53.113 "path": "/tmp/tmp.lE7rsacguk" 00:19:53.113 } 00:19:53.113 } 00:19:53.113 ] 00:19:53.113 }, 00:19:53.113 { 00:19:53.113 "subsystem": "iobuf", 00:19:53.113 "config": [ 00:19:53.113 { 00:19:53.113 "method": "iobuf_set_options", 00:19:53.113 "params": { 00:19:53.113 "small_pool_count": 8192, 00:19:53.113 "large_pool_count": 1024, 00:19:53.113 "small_bufsize": 8192, 00:19:53.113 "large_bufsize": 135168, 00:19:53.113 "enable_numa": false 00:19:53.113 } 00:19:53.113 } 00:19:53.113 ] 00:19:53.113 }, 00:19:53.113 { 00:19:53.113 "subsystem": "sock", 00:19:53.113 "config": [ 00:19:53.113 { 00:19:53.113 "method": "sock_set_default_impl", 00:19:53.113 "params": { 00:19:53.113 "impl_name": "posix" 00:19:53.113 } 00:19:53.113 }, 00:19:53.113 { 00:19:53.113 "method": "sock_impl_set_options", 00:19:53.113 "params": { 00:19:53.113 "impl_name": "ssl", 00:19:53.113 "recv_buf_size": 4096, 00:19:53.113 "send_buf_size": 4096, 00:19:53.113 "enable_recv_pipe": true, 00:19:53.113 "enable_quickack": false, 00:19:53.113 "enable_placement_id": 0, 00:19:53.113 "enable_zerocopy_send_server": true, 00:19:53.113 "enable_zerocopy_send_client": false, 00:19:53.113 "zerocopy_threshold": 0, 00:19:53.113 "tls_version": 0, 00:19:53.113 "enable_ktls": false 00:19:53.113 } 00:19:53.113 }, 00:19:53.113 { 00:19:53.113 "method": "sock_impl_set_options", 00:19:53.113 "params": { 00:19:53.113 "impl_name": "posix", 00:19:53.113 "recv_buf_size": 2097152, 00:19:53.113 "send_buf_size": 2097152, 00:19:53.113 "enable_recv_pipe": true, 00:19:53.113 "enable_quickack": false, 00:19:53.113 "enable_placement_id": 0, 00:19:53.113 "enable_zerocopy_send_server": true, 00:19:53.113 
"enable_zerocopy_send_client": false, 00:19:53.113 "zerocopy_threshold": 0, 00:19:53.113 "tls_version": 0, 00:19:53.113 "enable_ktls": false 00:19:53.113 } 00:19:53.113 } 00:19:53.113 ] 00:19:53.113 }, 00:19:53.113 { 00:19:53.113 "subsystem": "vmd", 00:19:53.113 "config": [] 00:19:53.113 }, 00:19:53.113 { 00:19:53.113 "subsystem": "accel", 00:19:53.113 "config": [ 00:19:53.113 { 00:19:53.113 "method": "accel_set_options", 00:19:53.113 "params": { 00:19:53.113 "small_cache_size": 128, 00:19:53.113 "large_cache_size": 16, 00:19:53.113 "task_count": 2048, 00:19:53.113 "sequence_count": 2048, 00:19:53.113 "buf_count": 2048 00:19:53.113 } 00:19:53.113 } 00:19:53.113 ] 00:19:53.113 }, 00:19:53.113 { 00:19:53.113 "subsystem": "bdev", 00:19:53.113 "config": [ 00:19:53.113 { 00:19:53.113 "method": "bdev_set_options", 00:19:53.113 "params": { 00:19:53.113 "bdev_io_pool_size": 65535, 00:19:53.113 "bdev_io_cache_size": 256, 00:19:53.113 "bdev_auto_examine": true, 00:19:53.113 "iobuf_small_cache_size": 128, 00:19:53.113 "iobuf_large_cache_size": 16 00:19:53.113 } 00:19:53.113 }, 00:19:53.114 { 00:19:53.114 "method": "bdev_raid_set_options", 00:19:53.114 "params": { 00:19:53.114 "process_window_size_kb": 1024, 00:19:53.114 "process_max_bandwidth_mb_sec": 0 00:19:53.114 } 00:19:53.114 }, 00:19:53.114 { 00:19:53.114 "method": "bdev_iscsi_set_options", 00:19:53.114 "params": { 00:19:53.114 "timeout_sec": 30 00:19:53.114 } 00:19:53.114 }, 00:19:53.114 { 00:19:53.114 "method": "bdev_nvme_set_options", 00:19:53.114 "params": { 00:19:53.114 "action_on_timeout": "none", 00:19:53.114 "timeout_us": 0, 00:19:53.114 "timeout_admin_us": 0, 00:19:53.114 "keep_alive_timeout_ms": 10000, 00:19:53.114 "arbitration_burst": 0, 00:19:53.114 "low_priority_weight": 0, 00:19:53.114 "medium_priority_weight": 0, 00:19:53.114 "high_priority_weight": 0, 00:19:53.114 "nvme_adminq_poll_period_us": 10000, 00:19:53.114 "nvme_ioq_poll_period_us": 0, 00:19:53.114 "io_queue_requests": 512, 00:19:53.114 
"delay_cmd_submit": true, 00:19:53.114 "transport_retry_count": 4, 00:19:53.114 "bdev_retry_count": 3, 00:19:53.114 "transport_ack_timeout": 0, 00:19:53.114 "ctrlr_loss_timeout_sec": 0, 00:19:53.114 "reconnect_delay_sec": 0, 00:19:53.114 "fast_io_fail_timeout_sec": 0, 00:19:53.114 "disable_auto_failback": false, 00:19:53.114 "generate_uuids": false, 00:19:53.114 "transport_tos": 0, 00:19:53.114 "nvme_error_stat": false, 00:19:53.114 "rdma_srq_size": 0, 00:19:53.114 "io_path_stat": false, 00:19:53.114 "allow_accel_sequence": false, 00:19:53.114 "rdma_max_cq_size": 0, 00:19:53.114 "rdma_cm_event_timeout_ms": 0, 00:19:53.114 "dhchap_digests": [ 00:19:53.114 "sha256", 00:19:53.114 "sha384", 00:19:53.114 "sha512" 00:19:53.114 ], 00:19:53.114 "dhchap_dhgroups": [ 00:19:53.114 "null", 00:19:53.114 "ffdhe2048", 00:19:53.114 "ffdhe3072", 00:19:53.114 "ffdhe4096", 00:19:53.114 "ffdhe6144", 00:19:53.114 "ffdhe8192" 00:19:53.114 ], 00:19:53.114 "rdma_umr_per_io": false 00:19:53.114 } 00:19:53.114 }, 00:19:53.114 { 00:19:53.114 "method": "bdev_nvme_attach_controller", 00:19:53.114 "params": { 00:19:53.114 "name": "TLSTEST", 00:19:53.114 "trtype": "TCP", 00:19:53.114 "adrfam": "IPv4", 00:19:53.114 "traddr": "10.0.0.2", 00:19:53.114 "trsvcid": "4420", 00:19:53.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.114 "prchk_reftag": false, 00:19:53.114 "prchk_guard": false, 00:19:53.114 "ctrlr_loss_timeout_sec": 0, 00:19:53.114 "reconnect_delay_sec": 0, 00:19:53.114 "fast_io_fail_timeout_sec": 0, 00:19:53.114 "psk": "key0", 00:19:53.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.114 "hdgst": false, 00:19:53.114 "ddgst": false, 00:19:53.114 "multipath": "multipath" 00:19:53.114 } 00:19:53.114 }, 00:19:53.114 { 00:19:53.114 "method": "bdev_nvme_set_hotplug", 00:19:53.114 "params": { 00:19:53.114 "period_us": 100000, 00:19:53.114 "enable": false 00:19:53.114 } 00:19:53.114 }, 00:19:53.114 { 00:19:53.114 "method": "bdev_wait_for_examine" 00:19:53.114 } 00:19:53.114 ] 
00:19:53.114 }, 00:19:53.114 { 00:19:53.114 "subsystem": "nbd", 00:19:53.114 "config": [] 00:19:53.114 } 00:19:53.114 ] 00:19:53.114 }' 00:19:53.114 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.114 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.114 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.373 [2024-12-10 22:51:00.849876] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:19:53.373 [2024-12-10 22:51:00.849972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89781 ] 00:19:53.373 [2024-12-10 22:51:00.917223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.373 [2024-12-10 22:51:00.976275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.631 [2024-12-10 22:51:01.153160] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.631 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.631 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.631 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:53.890 Running I/O for 10 seconds... 
00:19:55.820 3141.00 IOPS, 12.27 MiB/s [2024-12-10T21:51:04.485Z] 3245.50 IOPS, 12.68 MiB/s [2024-12-10T21:51:05.419Z] 3240.00 IOPS, 12.66 MiB/s [2024-12-10T21:51:06.794Z] 3219.50 IOPS, 12.58 MiB/s [2024-12-10T21:51:07.727Z] 3228.40 IOPS, 12.61 MiB/s [2024-12-10T21:51:08.661Z] 3245.83 IOPS, 12.68 MiB/s [2024-12-10T21:51:09.595Z] 3247.00 IOPS, 12.68 MiB/s [2024-12-10T21:51:10.529Z] 3248.62 IOPS, 12.69 MiB/s [2024-12-10T21:51:11.462Z] 3249.44 IOPS, 12.69 MiB/s [2024-12-10T21:51:11.462Z] 3248.50 IOPS, 12.69 MiB/s 00:20:03.730 Latency(us) 00:20:03.730 [2024-12-10T21:51:11.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.730 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:03.730 Verification LBA range: start 0x0 length 0x2000 00:20:03.730 TLSTESTn1 : 10.03 3252.83 12.71 0.00 0.00 39272.73 11456.66 37282.70 00:20:03.730 [2024-12-10T21:51:11.462Z] =================================================================================================================== 00:20:03.730 [2024-12-10T21:51:11.462Z] Total : 3252.83 12.71 0.00 0.00 39272.73 11456.66 37282.70 00:20:03.730 { 00:20:03.730 "results": [ 00:20:03.730 { 00:20:03.730 "job": "TLSTESTn1", 00:20:03.730 "core_mask": "0x4", 00:20:03.730 "workload": "verify", 00:20:03.730 "status": "finished", 00:20:03.730 "verify_range": { 00:20:03.730 "start": 0, 00:20:03.730 "length": 8192 00:20:03.730 }, 00:20:03.730 "queue_depth": 128, 00:20:03.730 "io_size": 4096, 00:20:03.730 "runtime": 10.025723, 00:20:03.730 "iops": 3252.8327383471496, 00:20:03.730 "mibps": 12.706377884168553, 00:20:03.730 "io_failed": 0, 00:20:03.730 "io_timeout": 0, 00:20:03.730 "avg_latency_us": 39272.72807410133, 00:20:03.730 "min_latency_us": 11456.663703703704, 00:20:03.730 "max_latency_us": 37282.70222222222 00:20:03.730 } 00:20:03.730 ], 00:20:03.730 "core_count": 1 00:20:03.730 } 00:20:03.730 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:03.730 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 89781 00:20:03.730 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 89781 ']' 00:20:03.730 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 89781 00:20:03.730 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.730 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.730 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89781 00:20:03.988 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:03.988 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:03.988 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89781' 00:20:03.988 killing process with pid 89781 00:20:03.988 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 89781 00:20:03.988 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.988 00:20:03.988 Latency(us) 00:20:03.988 [2024-12-10T21:51:11.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.988 [2024-12-10T21:51:11.720Z] =================================================================================================================== 00:20:03.988 [2024-12-10T21:51:11.720Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.988 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 89781 00:20:03.988 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 89579 00:20:03.988 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' 
-z 89579 ']' 00:20:03.988 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 89579 00:20:03.988 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.988 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.988 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89579 00:20:04.247 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:04.247 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:04.247 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89579' 00:20:04.247 killing process with pid 89579 00:20:04.247 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 89579 00:20:04.247 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 89579 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=91581 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 91581 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 91581 ']' 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.505 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.505 [2024-12-10 22:51:12.042038] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:04.505 [2024-12-10 22:51:12.042137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.505 [2024-12-10 22:51:12.114781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.505 [2024-12-10 22:51:12.168624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.505 [2024-12-10 22:51:12.168686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.505 [2024-12-10 22:51:12.168709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.505 [2024-12-10 22:51:12.168719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.505 [2024-12-10 22:51:12.168728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:04.505 [2024-12-10 22:51:12.169259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.762 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.762 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:04.762 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.762 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.762 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.762 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.762 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.lE7rsacguk 00:20:04.762 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lE7rsacguk 00:20:04.762 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:05.020 [2024-12-10 22:51:12.545055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.020 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:05.277 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:05.535 [2024-12-10 22:51:13.142692] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:05.535 [2024-12-10 22:51:13.142980] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:05.535 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:05.792 malloc0 00:20:05.792 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:06.049 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lE7rsacguk 00:20:06.306 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:06.564 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=91862 00:20:06.564 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:06.564 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.564 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 91862 /var/tmp/bdevperf.sock 00:20:06.564 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 91862 ']' 00:20:06.564 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.564 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.564 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:06.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.564 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.564 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.564 [2024-12-10 22:51:14.283115] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:06.564 [2024-12-10 22:51:14.283214] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91862 ] 00:20:06.821 [2024-12-10 22:51:14.352815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.821 [2024-12-10 22:51:14.409141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.821 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.821 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:06.821 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lE7rsacguk 00:20:07.079 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:07.337 [2024-12-10 22:51:15.032863] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.594 nvme0n1 00:20:07.594 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:07.594 Running I/O for 1 seconds... 00:20:08.528 3495.00 IOPS, 13.65 MiB/s 00:20:08.528 Latency(us) 00:20:08.528 [2024-12-10T21:51:16.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.528 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:08.528 Verification LBA range: start 0x0 length 0x2000 00:20:08.528 nvme0n1 : 1.02 3545.91 13.85 0.00 0.00 35720.89 8932.31 33593.27 00:20:08.528 [2024-12-10T21:51:16.260Z] =================================================================================================================== 00:20:08.528 [2024-12-10T21:51:16.260Z] Total : 3545.91 13.85 0.00 0.00 35720.89 8932.31 33593.27 00:20:08.528 { 00:20:08.528 "results": [ 00:20:08.528 { 00:20:08.528 "job": "nvme0n1", 00:20:08.528 "core_mask": "0x2", 00:20:08.528 "workload": "verify", 00:20:08.528 "status": "finished", 00:20:08.528 "verify_range": { 00:20:08.528 "start": 0, 00:20:08.528 "length": 8192 00:20:08.528 }, 00:20:08.528 "queue_depth": 128, 00:20:08.528 "io_size": 4096, 00:20:08.528 "runtime": 1.022022, 00:20:08.528 "iops": 3545.9119275318926, 00:20:08.528 "mibps": 13.851218466921456, 00:20:08.528 "io_failed": 0, 00:20:08.528 "io_timeout": 0, 00:20:08.528 "avg_latency_us": 35720.890824952985, 00:20:08.528 "min_latency_us": 8932.314074074075, 00:20:08.528 "max_latency_us": 33593.26814814815 00:20:08.528 } 00:20:08.528 ], 00:20:08.528 "core_count": 1 00:20:08.528 } 00:20:08.786 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 91862 00:20:08.786 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 91862 ']' 00:20:08.786 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 91862 00:20:08.786 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # 
uname 00:20:08.786 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.786 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91862 00:20:08.786 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:08.786 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:08.786 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91862' 00:20:08.786 killing process with pid 91862 00:20:08.786 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 91862 00:20:08.786 Received shutdown signal, test time was about 1.000000 seconds 00:20:08.786 00:20:08.786 Latency(us) 00:20:08.786 [2024-12-10T21:51:16.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.786 [2024-12-10T21:51:16.518Z] =================================================================================================================== 00:20:08.786 [2024-12-10T21:51:16.518Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:08.786 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 91862 00:20:09.045 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 91581 00:20:09.045 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 91581 ']' 00:20:09.045 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 91581 00:20:09.045 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:09.045 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.045 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 91581 00:20:09.045 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.045 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.045 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91581' 00:20:09.045 killing process with pid 91581 00:20:09.045 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 91581 00:20:09.045 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 91581 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=92245 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 92245 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 92245 ']' 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:09.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.303 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.303 [2024-12-10 22:51:16.845083] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:09.303 [2024-12-10 22:51:16.845160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.303 [2024-12-10 22:51:16.916471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.303 [2024-12-10 22:51:16.968667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.303 [2024-12-10 22:51:16.968731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.303 [2024-12-10 22:51:16.968744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.303 [2024-12-10 22:51:16.968755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.303 [2024-12-10 22:51:16.968764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:09.303 [2024-12-10 22:51:16.969302] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.561 [2024-12-10 22:51:17.157936] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.561 malloc0 00:20:09.561 [2024-12-10 22:51:17.189512] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.561 [2024-12-10 22:51:17.189809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=92266 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 92266 /var/tmp/bdevperf.sock 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 92266 ']' 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.561 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.562 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.562 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.562 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.562 [2024-12-10 22:51:17.260493] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:20:09.562 [2024-12-10 22:51:17.260582] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92266 ] 00:20:09.820 [2024-12-10 22:51:17.326751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.820 [2024-12-10 22:51:17.383436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.820 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.820 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:09.820 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lE7rsacguk 00:20:10.385 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:10.385 [2024-12-10 22:51:18.057601] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.643 nvme0n1 00:20:10.643 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:10.643 Running I/O for 1 seconds... 
00:20:11.576 3386.00 IOPS, 13.23 MiB/s 00:20:11.576 Latency(us) 00:20:11.576 [2024-12-10T21:51:19.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.576 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:11.576 Verification LBA range: start 0x0 length 0x2000 00:20:11.576 nvme0n1 : 1.02 3456.44 13.50 0.00 0.00 36729.58 5849.69 27962.03 00:20:11.576 [2024-12-10T21:51:19.308Z] =================================================================================================================== 00:20:11.576 [2024-12-10T21:51:19.308Z] Total : 3456.44 13.50 0.00 0.00 36729.58 5849.69 27962.03 00:20:11.576 { 00:20:11.576 "results": [ 00:20:11.576 { 00:20:11.576 "job": "nvme0n1", 00:20:11.576 "core_mask": "0x2", 00:20:11.576 "workload": "verify", 00:20:11.576 "status": "finished", 00:20:11.576 "verify_range": { 00:20:11.576 "start": 0, 00:20:11.576 "length": 8192 00:20:11.576 }, 00:20:11.576 "queue_depth": 128, 00:20:11.576 "io_size": 4096, 00:20:11.576 "runtime": 1.016653, 00:20:11.576 "iops": 3456.4399062413627, 00:20:11.576 "mibps": 13.501718383755323, 00:20:11.576 "io_failed": 0, 00:20:11.576 "io_timeout": 0, 00:20:11.576 "avg_latency_us": 36729.582056957355, 00:20:11.576 "min_latency_us": 5849.694814814815, 00:20:11.576 "max_latency_us": 27962.02666666667 00:20:11.576 } 00:20:11.576 ], 00:20:11.576 "core_count": 1 00:20:11.576 } 00:20:11.576 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:11.576 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.576 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.834 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.834 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:11.834 "subsystems": [ 00:20:11.834 { 00:20:11.834 "subsystem": 
"keyring", 00:20:11.834 "config": [ 00:20:11.834 { 00:20:11.834 "method": "keyring_file_add_key", 00:20:11.834 "params": { 00:20:11.834 "name": "key0", 00:20:11.834 "path": "/tmp/tmp.lE7rsacguk" 00:20:11.834 } 00:20:11.834 } 00:20:11.834 ] 00:20:11.834 }, 00:20:11.834 { 00:20:11.834 "subsystem": "iobuf", 00:20:11.834 "config": [ 00:20:11.834 { 00:20:11.834 "method": "iobuf_set_options", 00:20:11.834 "params": { 00:20:11.834 "small_pool_count": 8192, 00:20:11.834 "large_pool_count": 1024, 00:20:11.834 "small_bufsize": 8192, 00:20:11.834 "large_bufsize": 135168, 00:20:11.834 "enable_numa": false 00:20:11.834 } 00:20:11.834 } 00:20:11.834 ] 00:20:11.834 }, 00:20:11.834 { 00:20:11.834 "subsystem": "sock", 00:20:11.834 "config": [ 00:20:11.834 { 00:20:11.834 "method": "sock_set_default_impl", 00:20:11.834 "params": { 00:20:11.834 "impl_name": "posix" 00:20:11.834 } 00:20:11.834 }, 00:20:11.834 { 00:20:11.834 "method": "sock_impl_set_options", 00:20:11.834 "params": { 00:20:11.834 "impl_name": "ssl", 00:20:11.834 "recv_buf_size": 4096, 00:20:11.834 "send_buf_size": 4096, 00:20:11.834 "enable_recv_pipe": true, 00:20:11.834 "enable_quickack": false, 00:20:11.834 "enable_placement_id": 0, 00:20:11.834 "enable_zerocopy_send_server": true, 00:20:11.834 "enable_zerocopy_send_client": false, 00:20:11.834 "zerocopy_threshold": 0, 00:20:11.834 "tls_version": 0, 00:20:11.834 "enable_ktls": false 00:20:11.834 } 00:20:11.834 }, 00:20:11.834 { 00:20:11.834 "method": "sock_impl_set_options", 00:20:11.834 "params": { 00:20:11.834 "impl_name": "posix", 00:20:11.834 "recv_buf_size": 2097152, 00:20:11.834 "send_buf_size": 2097152, 00:20:11.834 "enable_recv_pipe": true, 00:20:11.834 "enable_quickack": false, 00:20:11.834 "enable_placement_id": 0, 00:20:11.834 "enable_zerocopy_send_server": true, 00:20:11.834 "enable_zerocopy_send_client": false, 00:20:11.834 "zerocopy_threshold": 0, 00:20:11.834 "tls_version": 0, 00:20:11.834 "enable_ktls": false 00:20:11.834 } 00:20:11.834 } 00:20:11.834 
] 00:20:11.834 }, 00:20:11.834 { 00:20:11.834 "subsystem": "vmd", 00:20:11.834 "config": [] 00:20:11.834 }, 00:20:11.834 { 00:20:11.834 "subsystem": "accel", 00:20:11.834 "config": [ 00:20:11.834 { 00:20:11.834 "method": "accel_set_options", 00:20:11.834 "params": { 00:20:11.834 "small_cache_size": 128, 00:20:11.834 "large_cache_size": 16, 00:20:11.834 "task_count": 2048, 00:20:11.834 "sequence_count": 2048, 00:20:11.834 "buf_count": 2048 00:20:11.834 } 00:20:11.834 } 00:20:11.834 ] 00:20:11.834 }, 00:20:11.834 { 00:20:11.834 "subsystem": "bdev", 00:20:11.834 "config": [ 00:20:11.834 { 00:20:11.834 "method": "bdev_set_options", 00:20:11.834 "params": { 00:20:11.834 "bdev_io_pool_size": 65535, 00:20:11.834 "bdev_io_cache_size": 256, 00:20:11.834 "bdev_auto_examine": true, 00:20:11.834 "iobuf_small_cache_size": 128, 00:20:11.834 "iobuf_large_cache_size": 16 00:20:11.834 } 00:20:11.834 }, 00:20:11.834 { 00:20:11.834 "method": "bdev_raid_set_options", 00:20:11.834 "params": { 00:20:11.834 "process_window_size_kb": 1024, 00:20:11.834 "process_max_bandwidth_mb_sec": 0 00:20:11.834 } 00:20:11.834 }, 00:20:11.834 { 00:20:11.834 "method": "bdev_iscsi_set_options", 00:20:11.834 "params": { 00:20:11.834 "timeout_sec": 30 00:20:11.834 } 00:20:11.834 }, 00:20:11.834 { 00:20:11.834 "method": "bdev_nvme_set_options", 00:20:11.834 "params": { 00:20:11.834 "action_on_timeout": "none", 00:20:11.834 "timeout_us": 0, 00:20:11.834 "timeout_admin_us": 0, 00:20:11.834 "keep_alive_timeout_ms": 10000, 00:20:11.834 "arbitration_burst": 0, 00:20:11.834 "low_priority_weight": 0, 00:20:11.834 "medium_priority_weight": 0, 00:20:11.834 "high_priority_weight": 0, 00:20:11.834 "nvme_adminq_poll_period_us": 10000, 00:20:11.834 "nvme_ioq_poll_period_us": 0, 00:20:11.834 "io_queue_requests": 0, 00:20:11.834 "delay_cmd_submit": true, 00:20:11.834 "transport_retry_count": 4, 00:20:11.834 "bdev_retry_count": 3, 00:20:11.834 "transport_ack_timeout": 0, 00:20:11.834 "ctrlr_loss_timeout_sec": 0, 
00:20:11.834 "reconnect_delay_sec": 0, 00:20:11.835 "fast_io_fail_timeout_sec": 0, 00:20:11.835 "disable_auto_failback": false, 00:20:11.835 "generate_uuids": false, 00:20:11.835 "transport_tos": 0, 00:20:11.835 "nvme_error_stat": false, 00:20:11.835 "rdma_srq_size": 0, 00:20:11.835 "io_path_stat": false, 00:20:11.835 "allow_accel_sequence": false, 00:20:11.835 "rdma_max_cq_size": 0, 00:20:11.835 "rdma_cm_event_timeout_ms": 0, 00:20:11.835 "dhchap_digests": [ 00:20:11.835 "sha256", 00:20:11.835 "sha384", 00:20:11.835 "sha512" 00:20:11.835 ], 00:20:11.835 "dhchap_dhgroups": [ 00:20:11.835 "null", 00:20:11.835 "ffdhe2048", 00:20:11.835 "ffdhe3072", 00:20:11.835 "ffdhe4096", 00:20:11.835 "ffdhe6144", 00:20:11.835 "ffdhe8192" 00:20:11.835 ], 00:20:11.835 "rdma_umr_per_io": false 00:20:11.835 } 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "method": "bdev_nvme_set_hotplug", 00:20:11.835 "params": { 00:20:11.835 "period_us": 100000, 00:20:11.835 "enable": false 00:20:11.835 } 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "method": "bdev_malloc_create", 00:20:11.835 "params": { 00:20:11.835 "name": "malloc0", 00:20:11.835 "num_blocks": 8192, 00:20:11.835 "block_size": 4096, 00:20:11.835 "physical_block_size": 4096, 00:20:11.835 "uuid": "524fce41-2d63-4eed-af4f-d0ebf3565b23", 00:20:11.835 "optimal_io_boundary": 0, 00:20:11.835 "md_size": 0, 00:20:11.835 "dif_type": 0, 00:20:11.835 "dif_is_head_of_md": false, 00:20:11.835 "dif_pi_format": 0 00:20:11.835 } 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "method": "bdev_wait_for_examine" 00:20:11.835 } 00:20:11.835 ] 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "subsystem": "nbd", 00:20:11.835 "config": [] 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "subsystem": "scheduler", 00:20:11.835 "config": [ 00:20:11.835 { 00:20:11.835 "method": "framework_set_scheduler", 00:20:11.835 "params": { 00:20:11.835 "name": "static" 00:20:11.835 } 00:20:11.835 } 00:20:11.835 ] 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "subsystem": "nvmf", 
00:20:11.835 "config": [ 00:20:11.835 { 00:20:11.835 "method": "nvmf_set_config", 00:20:11.835 "params": { 00:20:11.835 "discovery_filter": "match_any", 00:20:11.835 "admin_cmd_passthru": { 00:20:11.835 "identify_ctrlr": false 00:20:11.835 }, 00:20:11.835 "dhchap_digests": [ 00:20:11.835 "sha256", 00:20:11.835 "sha384", 00:20:11.835 "sha512" 00:20:11.835 ], 00:20:11.835 "dhchap_dhgroups": [ 00:20:11.835 "null", 00:20:11.835 "ffdhe2048", 00:20:11.835 "ffdhe3072", 00:20:11.835 "ffdhe4096", 00:20:11.835 "ffdhe6144", 00:20:11.835 "ffdhe8192" 00:20:11.835 ] 00:20:11.835 } 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "method": "nvmf_set_max_subsystems", 00:20:11.835 "params": { 00:20:11.835 "max_subsystems": 1024 00:20:11.835 } 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "method": "nvmf_set_crdt", 00:20:11.835 "params": { 00:20:11.835 "crdt1": 0, 00:20:11.835 "crdt2": 0, 00:20:11.835 "crdt3": 0 00:20:11.835 } 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "method": "nvmf_create_transport", 00:20:11.835 "params": { 00:20:11.835 "trtype": "TCP", 00:20:11.835 "max_queue_depth": 128, 00:20:11.835 "max_io_qpairs_per_ctrlr": 127, 00:20:11.835 "in_capsule_data_size": 4096, 00:20:11.835 "max_io_size": 131072, 00:20:11.835 "io_unit_size": 131072, 00:20:11.835 "max_aq_depth": 128, 00:20:11.835 "num_shared_buffers": 511, 00:20:11.835 "buf_cache_size": 4294967295, 00:20:11.835 "dif_insert_or_strip": false, 00:20:11.835 "zcopy": false, 00:20:11.835 "c2h_success": false, 00:20:11.835 "sock_priority": 0, 00:20:11.835 "abort_timeout_sec": 1, 00:20:11.835 "ack_timeout": 0, 00:20:11.835 "data_wr_pool_size": 0 00:20:11.835 } 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "method": "nvmf_create_subsystem", 00:20:11.835 "params": { 00:20:11.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.835 "allow_any_host": false, 00:20:11.835 "serial_number": "00000000000000000000", 00:20:11.835 "model_number": "SPDK bdev Controller", 00:20:11.835 "max_namespaces": 32, 00:20:11.835 "min_cntlid": 1, 
00:20:11.835 "max_cntlid": 65519, 00:20:11.835 "ana_reporting": false 00:20:11.835 } 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "method": "nvmf_subsystem_add_host", 00:20:11.835 "params": { 00:20:11.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.835 "host": "nqn.2016-06.io.spdk:host1", 00:20:11.835 "psk": "key0" 00:20:11.835 } 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "method": "nvmf_subsystem_add_ns", 00:20:11.835 "params": { 00:20:11.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.835 "namespace": { 00:20:11.835 "nsid": 1, 00:20:11.835 "bdev_name": "malloc0", 00:20:11.835 "nguid": "524FCE412D634EEDAF4FD0EBF3565B23", 00:20:11.835 "uuid": "524fce41-2d63-4eed-af4f-d0ebf3565b23", 00:20:11.835 "no_auto_visible": false 00:20:11.835 } 00:20:11.835 } 00:20:11.835 }, 00:20:11.835 { 00:20:11.835 "method": "nvmf_subsystem_add_listener", 00:20:11.835 "params": { 00:20:11.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.835 "listen_address": { 00:20:11.835 "trtype": "TCP", 00:20:11.835 "adrfam": "IPv4", 00:20:11.835 "traddr": "10.0.0.2", 00:20:11.835 "trsvcid": "4420" 00:20:11.835 }, 00:20:11.835 "secure_channel": false, 00:20:11.835 "sock_impl": "ssl" 00:20:11.835 } 00:20:11.835 } 00:20:11.835 ] 00:20:11.835 } 00:20:11.835 ] 00:20:11.835 }' 00:20:11.835 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:12.094 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:12.094 "subsystems": [ 00:20:12.094 { 00:20:12.094 "subsystem": "keyring", 00:20:12.094 "config": [ 00:20:12.094 { 00:20:12.094 "method": "keyring_file_add_key", 00:20:12.094 "params": { 00:20:12.094 "name": "key0", 00:20:12.094 "path": "/tmp/tmp.lE7rsacguk" 00:20:12.094 } 00:20:12.094 } 00:20:12.094 ] 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "subsystem": "iobuf", 00:20:12.094 "config": [ 00:20:12.094 { 00:20:12.094 "method": 
"iobuf_set_options", 00:20:12.094 "params": { 00:20:12.094 "small_pool_count": 8192, 00:20:12.094 "large_pool_count": 1024, 00:20:12.094 "small_bufsize": 8192, 00:20:12.094 "large_bufsize": 135168, 00:20:12.094 "enable_numa": false 00:20:12.094 } 00:20:12.094 } 00:20:12.094 ] 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "subsystem": "sock", 00:20:12.094 "config": [ 00:20:12.094 { 00:20:12.094 "method": "sock_set_default_impl", 00:20:12.094 "params": { 00:20:12.094 "impl_name": "posix" 00:20:12.094 } 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "method": "sock_impl_set_options", 00:20:12.094 "params": { 00:20:12.094 "impl_name": "ssl", 00:20:12.094 "recv_buf_size": 4096, 00:20:12.094 "send_buf_size": 4096, 00:20:12.094 "enable_recv_pipe": true, 00:20:12.094 "enable_quickack": false, 00:20:12.094 "enable_placement_id": 0, 00:20:12.094 "enable_zerocopy_send_server": true, 00:20:12.094 "enable_zerocopy_send_client": false, 00:20:12.094 "zerocopy_threshold": 0, 00:20:12.094 "tls_version": 0, 00:20:12.094 "enable_ktls": false 00:20:12.094 } 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "method": "sock_impl_set_options", 00:20:12.094 "params": { 00:20:12.094 "impl_name": "posix", 00:20:12.094 "recv_buf_size": 2097152, 00:20:12.094 "send_buf_size": 2097152, 00:20:12.094 "enable_recv_pipe": true, 00:20:12.094 "enable_quickack": false, 00:20:12.094 "enable_placement_id": 0, 00:20:12.094 "enable_zerocopy_send_server": true, 00:20:12.094 "enable_zerocopy_send_client": false, 00:20:12.094 "zerocopy_threshold": 0, 00:20:12.094 "tls_version": 0, 00:20:12.094 "enable_ktls": false 00:20:12.094 } 00:20:12.094 } 00:20:12.094 ] 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "subsystem": "vmd", 00:20:12.094 "config": [] 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "subsystem": "accel", 00:20:12.094 "config": [ 00:20:12.094 { 00:20:12.094 "method": "accel_set_options", 00:20:12.094 "params": { 00:20:12.094 "small_cache_size": 128, 00:20:12.094 "large_cache_size": 16, 00:20:12.094 "task_count": 
2048, 00:20:12.094 "sequence_count": 2048, 00:20:12.094 "buf_count": 2048 00:20:12.094 } 00:20:12.094 } 00:20:12.094 ] 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "subsystem": "bdev", 00:20:12.094 "config": [ 00:20:12.094 { 00:20:12.094 "method": "bdev_set_options", 00:20:12.094 "params": { 00:20:12.094 "bdev_io_pool_size": 65535, 00:20:12.094 "bdev_io_cache_size": 256, 00:20:12.094 "bdev_auto_examine": true, 00:20:12.094 "iobuf_small_cache_size": 128, 00:20:12.094 "iobuf_large_cache_size": 16 00:20:12.094 } 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "method": "bdev_raid_set_options", 00:20:12.094 "params": { 00:20:12.094 "process_window_size_kb": 1024, 00:20:12.094 "process_max_bandwidth_mb_sec": 0 00:20:12.094 } 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "method": "bdev_iscsi_set_options", 00:20:12.094 "params": { 00:20:12.094 "timeout_sec": 30 00:20:12.094 } 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "method": "bdev_nvme_set_options", 00:20:12.094 "params": { 00:20:12.094 "action_on_timeout": "none", 00:20:12.094 "timeout_us": 0, 00:20:12.094 "timeout_admin_us": 0, 00:20:12.094 "keep_alive_timeout_ms": 10000, 00:20:12.094 "arbitration_burst": 0, 00:20:12.094 "low_priority_weight": 0, 00:20:12.094 "medium_priority_weight": 0, 00:20:12.094 "high_priority_weight": 0, 00:20:12.094 "nvme_adminq_poll_period_us": 10000, 00:20:12.094 "nvme_ioq_poll_period_us": 0, 00:20:12.094 "io_queue_requests": 512, 00:20:12.094 "delay_cmd_submit": true, 00:20:12.094 "transport_retry_count": 4, 00:20:12.094 "bdev_retry_count": 3, 00:20:12.094 "transport_ack_timeout": 0, 00:20:12.094 "ctrlr_loss_timeout_sec": 0, 00:20:12.094 "reconnect_delay_sec": 0, 00:20:12.094 "fast_io_fail_timeout_sec": 0, 00:20:12.094 "disable_auto_failback": false, 00:20:12.094 "generate_uuids": false, 00:20:12.094 "transport_tos": 0, 00:20:12.094 "nvme_error_stat": false, 00:20:12.094 "rdma_srq_size": 0, 00:20:12.094 "io_path_stat": false, 00:20:12.094 "allow_accel_sequence": false, 00:20:12.094 
"rdma_max_cq_size": 0, 00:20:12.094 "rdma_cm_event_timeout_ms": 0, 00:20:12.094 "dhchap_digests": [ 00:20:12.094 "sha256", 00:20:12.094 "sha384", 00:20:12.094 "sha512" 00:20:12.094 ], 00:20:12.094 "dhchap_dhgroups": [ 00:20:12.094 "null", 00:20:12.094 "ffdhe2048", 00:20:12.094 "ffdhe3072", 00:20:12.094 "ffdhe4096", 00:20:12.094 "ffdhe6144", 00:20:12.094 "ffdhe8192" 00:20:12.094 ], 00:20:12.094 "rdma_umr_per_io": false 00:20:12.094 } 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "method": "bdev_nvme_attach_controller", 00:20:12.094 "params": { 00:20:12.094 "name": "nvme0", 00:20:12.094 "trtype": "TCP", 00:20:12.094 "adrfam": "IPv4", 00:20:12.094 "traddr": "10.0.0.2", 00:20:12.094 "trsvcid": "4420", 00:20:12.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.094 "prchk_reftag": false, 00:20:12.094 "prchk_guard": false, 00:20:12.094 "ctrlr_loss_timeout_sec": 0, 00:20:12.094 "reconnect_delay_sec": 0, 00:20:12.094 "fast_io_fail_timeout_sec": 0, 00:20:12.094 "psk": "key0", 00:20:12.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.094 "hdgst": false, 00:20:12.094 "ddgst": false, 00:20:12.094 "multipath": "multipath" 00:20:12.094 } 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "method": "bdev_nvme_set_hotplug", 00:20:12.094 "params": { 00:20:12.094 "period_us": 100000, 00:20:12.094 "enable": false 00:20:12.094 } 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "method": "bdev_enable_histogram", 00:20:12.094 "params": { 00:20:12.094 "name": "nvme0n1", 00:20:12.094 "enable": true 00:20:12.094 } 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "method": "bdev_wait_for_examine" 00:20:12.094 } 00:20:12.094 ] 00:20:12.094 }, 00:20:12.094 { 00:20:12.094 "subsystem": "nbd", 00:20:12.094 "config": [] 00:20:12.094 } 00:20:12.094 ] 00:20:12.094 }' 00:20:12.094 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 92266 00:20:12.094 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 92266 ']' 00:20:12.094 22:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 92266 00:20:12.094 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:12.094 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.094 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92266 00:20:12.094 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:12.094 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:12.094 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92266' 00:20:12.094 killing process with pid 92266 00:20:12.094 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 92266 00:20:12.094 Received shutdown signal, test time was about 1.000000 seconds 00:20:12.094 00:20:12.094 Latency(us) 00:20:12.094 [2024-12-10T21:51:19.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.094 [2024-12-10T21:51:19.826Z] =================================================================================================================== 00:20:12.094 [2024-12-10T21:51:19.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.095 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 92266 00:20:12.353 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 92245 00:20:12.353 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 92245 ']' 00:20:12.353 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 92245 00:20:12.353 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:12.353 22:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.353 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92245 00:20:12.353 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.353 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.353 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92245' 00:20:12.353 killing process with pid 92245 00:20:12.353 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 92245 00:20:12.353 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 92245 00:20:12.613 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:12.613 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:12.613 "subsystems": [ 00:20:12.613 { 00:20:12.613 "subsystem": "keyring", 00:20:12.613 "config": [ 00:20:12.613 { 00:20:12.613 "method": "keyring_file_add_key", 00:20:12.613 "params": { 00:20:12.613 "name": "key0", 00:20:12.613 "path": "/tmp/tmp.lE7rsacguk" 00:20:12.613 } 00:20:12.613 } 00:20:12.613 ] 00:20:12.613 }, 00:20:12.613 { 00:20:12.613 "subsystem": "iobuf", 00:20:12.613 "config": [ 00:20:12.613 { 00:20:12.613 "method": "iobuf_set_options", 00:20:12.613 "params": { 00:20:12.613 "small_pool_count": 8192, 00:20:12.613 "large_pool_count": 1024, 00:20:12.613 "small_bufsize": 8192, 00:20:12.613 "large_bufsize": 135168, 00:20:12.613 "enable_numa": false 00:20:12.613 } 00:20:12.613 } 00:20:12.613 ] 00:20:12.613 }, 00:20:12.613 { 00:20:12.613 "subsystem": "sock", 00:20:12.613 "config": [ 00:20:12.613 { 00:20:12.613 "method": "sock_set_default_impl", 00:20:12.613 "params": { 00:20:12.613 "impl_name": "posix" 00:20:12.613 
} 00:20:12.613 }, 00:20:12.613 { 00:20:12.613 "method": "sock_impl_set_options", 00:20:12.613 "params": { 00:20:12.613 "impl_name": "ssl", 00:20:12.613 "recv_buf_size": 4096, 00:20:12.613 "send_buf_size": 4096, 00:20:12.613 "enable_recv_pipe": true, 00:20:12.613 "enable_quickack": false, 00:20:12.613 "enable_placement_id": 0, 00:20:12.613 "enable_zerocopy_send_server": true, 00:20:12.613 "enable_zerocopy_send_client": false, 00:20:12.613 "zerocopy_threshold": 0, 00:20:12.613 "tls_version": 0, 00:20:12.613 "enable_ktls": false 00:20:12.613 } 00:20:12.613 }, 00:20:12.613 { 00:20:12.613 "method": "sock_impl_set_options", 00:20:12.613 "params": { 00:20:12.613 "impl_name": "posix", 00:20:12.613 "recv_buf_size": 2097152, 00:20:12.613 "send_buf_size": 2097152, 00:20:12.613 "enable_recv_pipe": true, 00:20:12.613 "enable_quickack": false, 00:20:12.613 "enable_placement_id": 0, 00:20:12.613 "enable_zerocopy_send_server": true, 00:20:12.613 "enable_zerocopy_send_client": false, 00:20:12.613 "zerocopy_threshold": 0, 00:20:12.613 "tls_version": 0, 00:20:12.613 "enable_ktls": false 00:20:12.613 } 00:20:12.613 } 00:20:12.613 ] 00:20:12.613 }, 00:20:12.613 { 00:20:12.613 "subsystem": "vmd", 00:20:12.613 "config": [] 00:20:12.613 }, 00:20:12.613 { 00:20:12.613 "subsystem": "accel", 00:20:12.613 "config": [ 00:20:12.613 { 00:20:12.613 "method": "accel_set_options", 00:20:12.613 "params": { 00:20:12.613 "small_cache_size": 128, 00:20:12.613 "large_cache_size": 16, 00:20:12.613 "task_count": 2048, 00:20:12.613 "sequence_count": 2048, 00:20:12.613 "buf_count": 2048 00:20:12.613 } 00:20:12.613 } 00:20:12.613 ] 00:20:12.613 }, 00:20:12.613 { 00:20:12.613 "subsystem": "bdev", 00:20:12.613 "config": [ 00:20:12.613 { 00:20:12.613 "method": "bdev_set_options", 00:20:12.613 "params": { 00:20:12.613 "bdev_io_pool_size": 65535, 00:20:12.613 "bdev_io_cache_size": 256, 00:20:12.613 "bdev_auto_examine": true, 00:20:12.613 "iobuf_small_cache_size": 128, 00:20:12.613 "iobuf_large_cache_size": 16 
00:20:12.613 } 00:20:12.613 }, 00:20:12.613 { 00:20:12.613 "method": "bdev_raid_set_options", 00:20:12.613 "params": { 00:20:12.613 "process_window_size_kb": 1024, 00:20:12.613 "process_max_bandwidth_mb_sec": 0 00:20:12.613 } 00:20:12.613 }, 00:20:12.613 { 00:20:12.613 "method": "bdev_iscsi_set_options", 00:20:12.613 "params": { 00:20:12.613 "timeout_sec": 30 00:20:12.613 } 00:20:12.613 }, 00:20:12.613 { 00:20:12.613 "method": "bdev_nvme_set_options", 00:20:12.614 "params": { 00:20:12.614 "action_on_timeout": "none", 00:20:12.614 "timeout_us": 0, 00:20:12.614 "timeout_admin_us": 0, 00:20:12.614 "keep_alive_timeout_ms": 10000, 00:20:12.614 "arbitration_burst": 0, 00:20:12.614 "low_priority_weight": 0, 00:20:12.614 "medium_priority_weight": 0, 00:20:12.614 "high_priority_weight": 0, 00:20:12.614 "nvme_adminq_poll_period_us": 10000, 00:20:12.614 "nvme_ioq_poll_period_us": 0, 00:20:12.614 "io_queue_requests": 0, 00:20:12.614 "delay_cmd_submit": true, 00:20:12.614 "transport_retry_count": 4, 00:20:12.614 "bdev_retry_count": 3, 00:20:12.614 "transport_ack_timeout": 0, 00:20:12.614 "ctrlr_loss_timeout_sec": 0, 00:20:12.614 "reconnect_delay_sec": 0, 00:20:12.614 "fast_io_fail_timeout_sec": 0, 00:20:12.614 "disable_auto_failback": false, 00:20:12.614 "generate_uuids": false, 00:20:12.614 "transport_tos": 0, 00:20:12.614 "nvme_error_stat": false, 00:20:12.614 "rdma_srq_size": 0, 00:20:12.614 "io_path_stat": false, 00:20:12.614 "allow_accel_sequence": false, 00:20:12.614 "rdma_max_cq_size": 0, 00:20:12.614 "rdma_cm_event_timeout_ms": 0, 00:20:12.614 "dhchap_digests": [ 00:20:12.614 "sha256", 00:20:12.614 "sha384", 00:20:12.614 "sha512" 00:20:12.614 ], 00:20:12.614 "dhchap_dhgroups": [ 00:20:12.614 "null", 00:20:12.614 "ffdhe2048", 00:20:12.614 "ffdhe3072", 00:20:12.614 "ffdhe4096", 00:20:12.614 "ffdhe6144", 00:20:12.614 "ffdhe8192" 00:20:12.614 ], 00:20:12.614 "rdma_umr_per_io": false 00:20:12.614 } 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "method": 
"bdev_nvme_set_hotplug", 00:20:12.614 "params": { 00:20:12.614 "period_us": 100000, 00:20:12.614 "enable": false 00:20:12.614 } 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "method": "bdev_malloc_create", 00:20:12.614 "params": { 00:20:12.614 "name": "malloc0", 00:20:12.614 "num_blocks": 8192, 00:20:12.614 "block_size": 4096, 00:20:12.614 "physical_block_size": 4096, 00:20:12.614 "uuid": "524fce41-2d63-4eed-af4f-d0ebf3565b23", 00:20:12.614 "optimal_io_boundary": 0, 00:20:12.614 "md_size": 0, 00:20:12.614 "dif_type": 0, 00:20:12.614 "dif_is_head_of_md": false, 00:20:12.614 "dif_pi_format": 0 00:20:12.614 } 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "method": "bdev_wait_for_examine" 00:20:12.614 } 00:20:12.614 ] 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "subsystem": "nbd", 00:20:12.614 "config": [] 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "subsystem": "scheduler", 00:20:12.614 "config": [ 00:20:12.614 { 00:20:12.614 "method": "framework_set_scheduler", 00:20:12.614 "params": { 00:20:12.614 "name": "static" 00:20:12.614 } 00:20:12.614 } 00:20:12.614 ] 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "subsystem": "nvmf", 00:20:12.614 "config": [ 00:20:12.614 { 00:20:12.614 "method": "nvmf_set_config", 00:20:12.614 "params": { 00:20:12.614 "discovery_filter": "match_any", 00:20:12.614 "admin_cmd_passthru": { 00:20:12.614 "identify_ctrlr": false 00:20:12.614 }, 00:20:12.614 "dhchap_digests": [ 00:20:12.614 "sha256", 00:20:12.614 "sha384", 00:20:12.614 "sha512" 00:20:12.614 ], 00:20:12.614 "dhchap_dhgroups": [ 00:20:12.614 "null", 00:20:12.614 "ffdhe2048", 00:20:12.614 "ffdhe3072", 00:20:12.614 "ffdhe4096", 00:20:12.614 "ffdhe6144", 00:20:12.614 "ffdhe8192" 00:20:12.614 ] 00:20:12.614 } 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "method": "nvmf_set_max_subsystems", 00:20:12.614 "params": { 00:20:12.614 "max_subsystems": 1024 00:20:12.614 } 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "method": "nvmf_set_crdt", 00:20:12.614 "params": { 00:20:12.614 "crdt1": 0, 00:20:12.614 
"crdt2": 0, 00:20:12.614 "crdt3": 0 00:20:12.614 } 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "method": "nvmf_create_transport", 00:20:12.614 "params": { 00:20:12.614 "trtype": "TCP", 00:20:12.614 "max_queue_depth": 128, 00:20:12.614 "max_io_qpairs_per_ctrlr": 127, 00:20:12.614 "in_capsule_data_size": 4096, 00:20:12.614 "max_io_size": 131072, 00:20:12.614 "io_unit_size": 131072, 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:12.614 "max_aq_depth": 128, 00:20:12.614 "num_shared_buffers": 511, 00:20:12.614 "buf_cache_size": 4294967295, 00:20:12.614 "dif_insert_or_strip": false, 00:20:12.614 "zcopy": false, 00:20:12.614 "c2h_success": false, 00:20:12.614 "sock_priority": 0, 00:20:12.614 "abort_timeout_sec": 1, 00:20:12.614 "ack_timeout": 0, 00:20:12.614 "data_wr_pool_size": 0 00:20:12.614 } 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "method": "nvmf_create_subsystem", 00:20:12.614 "params": { 00:20:12.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.614 "allow_any_host": false, 00:20:12.614 "serial_number": "00000000000000000000", 00:20:12.614 "model_number": "SPDK bdev Controller", 00:20:12.614 "max_namespaces": 32, 00:20:12.614 "min_cntlid": 1, 00:20:12.614 "max_cntlid": 65519, 00:20:12.614 "ana_reporting": false 00:20:12.614 } 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "method": "nvmf_subsystem_add_host", 00:20:12.614 "params": { 00:20:12.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.614 "host": "nqn.2016-06.io.spdk:host1", 00:20:12.614 "psk": "key0" 00:20:12.614 } 00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "method": "nvmf_subsystem_add_ns", 00:20:12.614 "params": { 00:20:12.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.614 "namespace": { 00:20:12.614 "nsid": 1, 00:20:12.614 "bdev_name": "malloc0", 00:20:12.614 "nguid": "524FCE412D634EEDAF4FD0EBF3565B23", 00:20:12.614 "uuid": "524fce41-2d63-4eed-af4f-d0ebf3565b23", 00:20:12.614 "no_auto_visible": false 00:20:12.614 } 00:20:12.614 }
00:20:12.614 }, 00:20:12.614 { 00:20:12.614 "method": "nvmf_subsystem_add_listener", 00:20:12.614 "params": { 00:20:12.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.614 "listen_address": { 00:20:12.614 "trtype": "TCP", 00:20:12.614 "adrfam": "IPv4", 00:20:12.614 "traddr": "10.0.0.2", 00:20:12.614 "trsvcid": "4420" 00:20:12.614 }, 00:20:12.614 "secure_channel": false, 00:20:12.614 "sock_impl": "ssl" 00:20:12.614 } 00:20:12.614 } 00:20:12.614 ] 00:20:12.614 } 00:20:12.614 ] 00:20:12.614 }' 00:20:12.614 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.614 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.614 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=92681 00:20:12.614 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:12.614 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 92681 00:20:12.614 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 92681 ']' 00:20:12.614 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.614 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.614 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:12.614 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.614 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.614 [2024-12-10 22:51:20.310799] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:12.614 [2024-12-10 22:51:20.310922] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.874 [2024-12-10 22:51:20.388200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.874 [2024-12-10 22:51:20.447925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.874 [2024-12-10 22:51:20.447991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.874 [2024-12-10 22:51:20.448004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.874 [2024-12-10 22:51:20.448031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.874 [2024-12-10 22:51:20.448041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:12.874 [2024-12-10 22:51:20.448729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.133 [2024-12-10 22:51:20.692502] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.133 [2024-12-10 22:51:20.724562] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.133 [2024-12-10 22:51:20.724841] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=92832 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 92832 /var/tmp/bdevperf.sock 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 92832 ']' 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.700 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:13.700 "subsystems": [ 00:20:13.700 { 00:20:13.700 "subsystem": "keyring", 00:20:13.700 "config": [ 00:20:13.700 { 00:20:13.700 "method": "keyring_file_add_key", 00:20:13.700 "params": { 00:20:13.700 "name": "key0", 00:20:13.700 "path": "/tmp/tmp.lE7rsacguk" 00:20:13.700 } 00:20:13.700 } 00:20:13.700 ] 00:20:13.700 }, 00:20:13.700 { 00:20:13.700 "subsystem": "iobuf", 00:20:13.700 "config": [ 00:20:13.700 { 00:20:13.700 "method": "iobuf_set_options", 00:20:13.700 "params": { 00:20:13.700 "small_pool_count": 8192, 00:20:13.700 "large_pool_count": 1024, 00:20:13.700 "small_bufsize": 8192, 00:20:13.700 "large_bufsize": 135168, 00:20:13.700 "enable_numa": false 00:20:13.700 } 00:20:13.700 } 00:20:13.700 ] 00:20:13.700 }, 00:20:13.700 { 00:20:13.700 "subsystem": "sock", 00:20:13.700 "config": [ 00:20:13.700 { 00:20:13.700 "method": "sock_set_default_impl", 00:20:13.700 "params": { 00:20:13.700 "impl_name": "posix" 00:20:13.700 } 00:20:13.700 }, 00:20:13.700 { 00:20:13.700 "method": "sock_impl_set_options", 00:20:13.700 "params": { 00:20:13.700 "impl_name": "ssl", 00:20:13.700 "recv_buf_size": 4096, 00:20:13.700 "send_buf_size": 4096, 00:20:13.700 "enable_recv_pipe": true, 00:20:13.700 "enable_quickack": false, 00:20:13.700 "enable_placement_id": 0, 00:20:13.700 "enable_zerocopy_send_server": true, 00:20:13.700 "enable_zerocopy_send_client": false, 00:20:13.700 "zerocopy_threshold": 0, 00:20:13.700 "tls_version": 0, 00:20:13.700 "enable_ktls": false 
00:20:13.700 } 00:20:13.700 }, 00:20:13.700 { 00:20:13.700 "method": "sock_impl_set_options", 00:20:13.700 "params": { 00:20:13.700 "impl_name": "posix", 00:20:13.700 "recv_buf_size": 2097152, 00:20:13.700 "send_buf_size": 2097152, 00:20:13.700 "enable_recv_pipe": true, 00:20:13.700 "enable_quickack": false, 00:20:13.700 "enable_placement_id": 0, 00:20:13.700 "enable_zerocopy_send_server": true, 00:20:13.700 "enable_zerocopy_send_client": false, 00:20:13.700 "zerocopy_threshold": 0, 00:20:13.700 "tls_version": 0, 00:20:13.700 "enable_ktls": false 00:20:13.700 } 00:20:13.700 } 00:20:13.700 ] 00:20:13.700 }, 00:20:13.700 { 00:20:13.700 "subsystem": "vmd", 00:20:13.700 "config": [] 00:20:13.700 }, 00:20:13.700 { 00:20:13.700 "subsystem": "accel", 00:20:13.700 "config": [ 00:20:13.700 { 00:20:13.700 "method": "accel_set_options", 00:20:13.700 "params": { 00:20:13.700 "small_cache_size": 128, 00:20:13.700 "large_cache_size": 16, 00:20:13.700 "task_count": 2048, 00:20:13.700 "sequence_count": 2048, 00:20:13.700 "buf_count": 2048 00:20:13.700 } 00:20:13.700 } 00:20:13.700 ] 00:20:13.700 }, 00:20:13.700 { 00:20:13.700 "subsystem": "bdev", 00:20:13.700 "config": [ 00:20:13.700 { 00:20:13.700 "method": "bdev_set_options", 00:20:13.700 "params": { 00:20:13.700 "bdev_io_pool_size": 65535, 00:20:13.700 "bdev_io_cache_size": 256, 00:20:13.700 "bdev_auto_examine": true, 00:20:13.700 "iobuf_small_cache_size": 128, 00:20:13.700 "iobuf_large_cache_size": 16 00:20:13.700 } 00:20:13.700 }, 00:20:13.700 { 00:20:13.700 "method": "bdev_raid_set_options", 00:20:13.700 "params": { 00:20:13.700 "process_window_size_kb": 1024, 00:20:13.700 "process_max_bandwidth_mb_sec": 0 00:20:13.700 } 00:20:13.700 }, 00:20:13.700 { 00:20:13.700 "method": "bdev_iscsi_set_options", 00:20:13.700 "params": { 00:20:13.700 "timeout_sec": 30 00:20:13.700 } 00:20:13.700 }, 00:20:13.700 { 00:20:13.700 "method": "bdev_nvme_set_options", 00:20:13.700 "params": { 00:20:13.700 "action_on_timeout": "none", 00:20:13.700 
"timeout_us": 0, 00:20:13.700 "timeout_admin_us": 0, 00:20:13.700 "keep_alive_timeout_ms": 10000, 00:20:13.700 "arbitration_burst": 0, 00:20:13.700 "low_priority_weight": 0, 00:20:13.700 "medium_priority_weight": 0, 00:20:13.700 "high_priority_weight": 0, 00:20:13.700 "nvme_adminq_poll_period_us": 10000, 00:20:13.700 "nvme_ioq_poll_period_us": 0, 00:20:13.700 "io_queue_requests": 512, 00:20:13.700 "delay_cmd_submit": true, 00:20:13.700 "transport_retry_count": 4, 00:20:13.700 "bdev_retry_count": 3, 00:20:13.700 "transport_ack_timeout": 0, 00:20:13.700 "ctrlr_loss_timeout_sec": 0, 00:20:13.700 "reconnect_delay_sec": 0, 00:20:13.700 "fast_io_fail_timeout_sec": 0, 00:20:13.700 "disable_auto_failback": false, 00:20:13.700 "generate_uuids": false, 00:20:13.700 "transport_tos": 0, 00:20:13.700 "nvme_error_stat": false, 00:20:13.700 "rdma_srq_size": 0, 00:20:13.701 "io_path_stat": false, 00:20:13.701 "allow_accel_sequence": false, 00:20:13.701 "rdma_max_cq_size": 0, 00:20:13.701 "rdma_cm_event_timeout_ms": 0, 00:20:13.701 "dhchap_digests": [ 00:20:13.701 "sha256", 00:20:13.701 "sha384", 00:20:13.701 "sha512" 00:20:13.701 ], 00:20:13.701 "dhchap_dhgroups": [ 00:20:13.701 "null", 00:20:13.701 "ffdhe2048", 00:20:13.701 "ffdhe3072", 00:20:13.701 "ffdhe4096", 00:20:13.701 "ffdhe6144", 00:20:13.701 "ffdhe8192" 00:20:13.701 ], 00:20:13.701 "rdma_umr_per_io": false 00:20:13.701 } 00:20:13.701 }, 00:20:13.701 { 00:20:13.701 "method": "bdev_nvme_attach_controller", 00:20:13.701 "params": { 00:20:13.701 "name": "nvme0", 00:20:13.701 "trtype": "TCP", 00:20:13.701 "adrfam": "IPv4", 00:20:13.701 "traddr": "10.0.0.2", 00:20:13.701 "trsvcid": "4420", 00:20:13.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.701 "prchk_reftag": false, 00:20:13.701 "prchk_guard": false, 00:20:13.701 "ctrlr_loss_timeout_sec": 0, 00:20:13.701 "reconnect_delay_sec": 0, 00:20:13.701 "fast_io_fail_timeout_sec": 0, 00:20:13.701 "psk": "key0", 00:20:13.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.701 
"hdgst": false, 00:20:13.701 "ddgst": false, 00:20:13.701 "multipath": "multipath" 00:20:13.701 } 00:20:13.701 }, 00:20:13.701 { 00:20:13.701 "method": "bdev_nvme_set_hotplug", 00:20:13.701 "params": { 00:20:13.701 "period_us": 100000, 00:20:13.701 "enable": false 00:20:13.701 } 00:20:13.701 }, 00:20:13.701 { 00:20:13.701 "method": "bdev_enable_histogram", 00:20:13.701 "params": { 00:20:13.701 "name": "nvme0n1", 00:20:13.701 "enable": true 00:20:13.701 } 00:20:13.701 }, 00:20:13.701 { 00:20:13.701 "method": "bdev_wait_for_examine" 00:20:13.701 } 00:20:13.701 ] 00:20:13.701 }, 00:20:13.701 { 00:20:13.701 "subsystem": "nbd", 00:20:13.701 "config": [] 00:20:13.701 } 00:20:13.701 ] 00:20:13.701 }' 00:20:13.701 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.701 [2024-12-10 22:51:21.372643] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:13.701 [2024-12-10 22:51:21.372717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92832 ] 00:20:13.959 [2024-12-10 22:51:21.438854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.959 [2024-12-10 22:51:21.495395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.959 [2024-12-10 22:51:21.668419] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.217 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.217 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:14.217 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:20:14.217 22:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:14.475 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.475 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.475 Running I/O for 1 seconds... 00:20:15.851 3626.00 IOPS, 14.16 MiB/s 00:20:15.851 Latency(us) 00:20:15.851 [2024-12-10T21:51:23.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.851 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.851 Verification LBA range: start 0x0 length 0x2000 00:20:15.851 nvme0n1 : 1.02 3671.45 14.34 0.00 0.00 34502.61 6407.96 27185.30 00:20:15.851 [2024-12-10T21:51:23.583Z] =================================================================================================================== 00:20:15.851 [2024-12-10T21:51:23.583Z] Total : 3671.45 14.34 0.00 0.00 34502.61 6407.96 27185.30 00:20:15.851 { 00:20:15.851 "results": [ 00:20:15.851 { 00:20:15.851 "job": "nvme0n1", 00:20:15.851 "core_mask": "0x2", 00:20:15.851 "workload": "verify", 00:20:15.851 "status": "finished", 00:20:15.851 "verify_range": { 00:20:15.851 "start": 0, 00:20:15.851 "length": 8192 00:20:15.851 }, 00:20:15.851 "queue_depth": 128, 00:20:15.851 "io_size": 4096, 00:20:15.851 "runtime": 1.022483, 00:20:15.851 "iops": 3671.45468433216, 00:20:15.851 "mibps": 14.3416198606725, 00:20:15.851 "io_failed": 0, 00:20:15.851 "io_timeout": 0, 00:20:15.851 "avg_latency_us": 34502.61102231693, 00:20:15.851 "min_latency_us": 6407.964444444445, 00:20:15.851 "max_latency_us": 27185.303703703703 00:20:15.851 } 00:20:15.851 ], 00:20:15.851 "core_count": 1 00:20:15.851 } 00:20:15.851 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:15.851 22:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:15.851 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:15.851 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:15.851 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:15.851 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:15.851 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:15.851 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:15.851 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:15.851 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:15.851 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:15.852 nvmf_trace.0 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 92832 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 92832 ']' 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 92832 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
92832 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92832' 00:20:15.852 killing process with pid 92832 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 92832 00:20:15.852 Received shutdown signal, test time was about 1.000000 seconds 00:20:15.852 00:20:15.852 Latency(us) 00:20:15.852 [2024-12-10T21:51:23.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.852 [2024-12-10T21:51:23.584Z] =================================================================================================================== 00:20:15.852 [2024-12-10T21:51:23.584Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 92832 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:15.852 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:15.852 rmmod nvme_tcp 00:20:15.852 rmmod nvme_fabrics 00:20:15.852 rmmod nvme_keyring 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 92681 ']' 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 92681 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 92681 ']' 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 92681 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92681 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92681' 00:20:16.110 killing process with pid 92681 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 92681 00:20:16.110 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 92681 00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 
00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.370 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.279 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:18.279 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9gny5Vn9Jd /tmp/tmp.sF39X5ILYt /tmp/tmp.lE7rsacguk 00:20:18.279 00:20:18.279 real 1m22.999s 00:20:18.279 user 2m16.848s 00:20:18.279 sys 0m25.515s 00:20:18.279 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.279 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.279 ************************************ 00:20:18.279 END TEST nvmf_tls 00:20:18.279 ************************************ 00:20:18.279 22:51:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:18.279 22:51:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:18.279 22:51:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:20:18.279 22:51:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:18.279 ************************************ 00:20:18.279 START TEST nvmf_fips 00:20:18.279 ************************************ 00:20:18.279 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:18.539 * Looking for test storage... 00:20:18.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:18.539 22:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:18.539 22:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.539 --rc genhtml_branch_coverage=1 00:20:18.539 --rc genhtml_function_coverage=1 00:20:18.539 --rc genhtml_legend=1 00:20:18.539 --rc geninfo_all_blocks=1 00:20:18.539 --rc geninfo_unexecuted_blocks=1 00:20:18.539 00:20:18.539 ' 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.539 --rc genhtml_branch_coverage=1 00:20:18.539 --rc genhtml_function_coverage=1 00:20:18.539 --rc genhtml_legend=1 00:20:18.539 --rc geninfo_all_blocks=1 00:20:18.539 --rc geninfo_unexecuted_blocks=1 00:20:18.539 00:20:18.539 ' 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.539 --rc genhtml_branch_coverage=1 00:20:18.539 --rc genhtml_function_coverage=1 00:20:18.539 --rc genhtml_legend=1 00:20:18.539 --rc geninfo_all_blocks=1 00:20:18.539 --rc geninfo_unexecuted_blocks=1 00:20:18.539 00:20:18.539 ' 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.539 --rc genhtml_branch_coverage=1 00:20:18.539 --rc genhtml_function_coverage=1 00:20:18.539 --rc genhtml_legend=1 00:20:18.539 --rc geninfo_all_blocks=1 00:20:18.539 --rc geninfo_unexecuted_blocks=1 00:20:18.539 00:20:18.539 ' 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.539 22:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.539 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.540 22:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:18.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
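
The trace above walks through `cmp_versions` / `ge` from `scripts/common.sh`, splitting each version string on `.` and comparing fields numerically until one side wins. A simplified stand-alone sketch of that logic (function name and structure here are illustrative, not the exact SPDK implementation):

```shell
# ver_ge A B: succeed (exit 0) when version A >= version B.
# Splits on '.', compares field-by-field numerically, missing fields count as 0.
ver_ge() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x > y )) && return 0   # A wins on this field
        (( x < y )) && return 1   # B wins on this field
    done
    return 0   # all fields equal
}

ver_ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"
```

This is the same outcome the log reaches: OpenSSL 3.1.1 satisfies the 3.0.0 minimum, so the FIPS test proceeds.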
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:18.540 Error setting digest 00:20:18.540 4082E618D67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:18.540 4082E618D67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:18.540 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:18.541 22:51:26 
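
The `openssl md5` failure above is expected: under a FIPS provider, non-approved digests are rejected at fetch time (`unsupported ... evp_fetch.c`), which is exactly what the test asserts via its `NOT` wrapper. A hedged sketch of that sanity check as a reusable helper (the two message strings are illustrative; the distro-specific error text in the log will differ):

```shell
# Probe whether the active OpenSSL configuration still allows MD5.
# Under FIPS enforcement the digest fetch fails, so the pipeline's
# exit status tells us which mode we are in.
check_fips_md5() {
    if echo -n test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 available: FIPS mode NOT active"
    else
        echo "MD5 rejected: approved-algorithms-only (FIPS) enforcement"
    fi
}
check_fips_md5
```

In the log this probe runs with `OPENSSL_CONF=spdk_fips.conf` pointing at the FIPS provider, so the rejection branch is the passing case.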
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.541 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.541 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.541 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:18.541 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:18.541 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:18.541 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:21.099 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:21.099 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:21.099 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:21.099 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.099 22:51:28 
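
The `Found net devices under 0000:0a:00.0: cvl_0_0` lines come from globbing each PCI device's `net/` directory in sysfs. A minimal sketch of that discovery loop, parameterized on a base directory so it can be exercised outside `/sys` (the PCI addresses and `cvl_*` names are the ones from this log):

```shell
# For each PCI address, report the network interfaces sysfs exposes
# under <base>/<pci>/net/. On a real system base is /sys/bus/pci/devices.
list_pci_netdevs() {
    local base=$1; shift
    local pci entry
    for pci in "$@"; do
        for entry in "$base/$pci/net/"*; do
            # glob leaves the pattern literal when nothing matches
            [ -e "$entry" ] && echo "Found net devices under $pci: ${entry##*/}"
        done
    done
}
```

Usage on a live box would be `list_pci_netdevs /sys/bus/pci/devices 0000:0a:00.0 0000:0a:00.1`, yielding the two `cvl_0_*` interfaces seen above.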
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.099 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:21.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:20:21.100 00:20:21.100 --- 10.0.0.2 ping statistics --- 00:20:21.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.100 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:20:21.100 00:20:21.100 --- 10.0.0.1 ping statistics --- 00:20:21.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.100 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:21.100 22:51:28 
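
The `nvmf_tcp_init` steps traced above isolate the target-side port in its own network namespace, address both ends, open TCP/4420, and ping across. A condensed sketch of that plumbing, using the interface and namespace names from this log; it requires root and real NICs, so it is defined as a function here rather than executed:

```shell
# Move the target NIC into a private netns, give each side a 10.0.0.x/24
# address, bring links up, punch a firewall hole for NVMe/TCP (4420),
# and verify reachability in both directions.
setup_spdk_netns() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    ip netns add "$ns"
    ip link set "$tgt" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$ini"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    ip link set "$ini" up
    ip netns exec "$ns" ip link set "$tgt" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

After this, the log prefixes the target app with `ip netns exec cvl_0_0_ns_spdk ...` so `nvmf_tgt` listens inside the namespace while the initiator connects from the host side.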
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=95068 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 95068 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 95068 ']' 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.100 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.100 [2024-12-10 22:51:28.580000] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:20:21.100 [2024-12-10 22:51:28.580109] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.100 [2024-12-10 22:51:28.651861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.100 [2024-12-10 22:51:28.704765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.100 [2024-12-10 22:51:28.704823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.100 [2024-12-10 22:51:28.704845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.100 [2024-12-10 22:51:28.704856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.100 [2024-12-10 22:51:28.704865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:21.100 [2024-12-10 22:51:28.705411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.67N 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.67N 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.67N 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.67N 00:20:21.380 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:21.380 [2024-12-10 22:51:29.096575] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.638 [2024-12-10 22:51:29.112578] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.638 [2024-12-10 22:51:29.112812] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.638 malloc0 00:20:21.638 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.638 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=95222 00:20:21.638 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:21.638 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 95222 /var/tmp/bdevperf.sock 00:20:21.638 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 95222 ']' 00:20:21.638 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.638 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.638 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.638 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.638 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.638 [2024-12-10 22:51:29.245053] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:20:21.638 [2024-12-10 22:51:29.245153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95222 ] 00:20:21.638 [2024-12-10 22:51:29.311730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.896 [2024-12-10 22:51:29.368714] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.896 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.896 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:21.896 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.67N 00:20:22.154 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:22.412 [2024-12-10 22:51:29.992102] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.412 TLSTESTn1 00:20:22.412 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.670 Running I/O for 10 seconds... 
00:20:24.537 3438.00 IOPS, 13.43 MiB/s [2024-12-10T21:51:33.643Z] 3505.00 IOPS, 13.69 MiB/s [2024-12-10T21:51:34.209Z] 3527.33 IOPS, 13.78 MiB/s [2024-12-10T21:51:35.583Z] 3550.00 IOPS, 13.87 MiB/s [2024-12-10T21:51:36.516Z] 3542.80 IOPS, 13.84 MiB/s [2024-12-10T21:51:37.451Z] 3534.33 IOPS, 13.81 MiB/s [2024-12-10T21:51:38.384Z] 3532.86 IOPS, 13.80 MiB/s [2024-12-10T21:51:39.318Z] 3545.50 IOPS, 13.85 MiB/s [2024-12-10T21:51:40.251Z] 3556.44 IOPS, 13.89 MiB/s [2024-12-10T21:51:40.251Z] 3554.30 IOPS, 13.88 MiB/s 00:20:32.519 Latency(us) 00:20:32.519 [2024-12-10T21:51:40.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.519 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:32.519 Verification LBA range: start 0x0 length 0x2000 00:20:32.519 TLSTESTn1 : 10.02 3559.97 13.91 0.00 0.00 35896.28 6359.42 41748.86 00:20:32.519 [2024-12-10T21:51:40.251Z] =================================================================================================================== 00:20:32.519 [2024-12-10T21:51:40.251Z] Total : 3559.97 13.91 0.00 0.00 35896.28 6359.42 41748.86 00:20:32.519 { 00:20:32.519 "results": [ 00:20:32.519 { 00:20:32.519 "job": "TLSTESTn1", 00:20:32.519 "core_mask": "0x4", 00:20:32.519 "workload": "verify", 00:20:32.519 "status": "finished", 00:20:32.519 "verify_range": { 00:20:32.519 "start": 0, 00:20:32.519 "length": 8192 00:20:32.519 }, 00:20:32.519 "queue_depth": 128, 00:20:32.519 "io_size": 4096, 00:20:32.519 "runtime": 10.019466, 00:20:32.519 "iops": 3559.9701620824903, 00:20:32.519 "mibps": 13.906133445634728, 00:20:32.519 "io_failed": 0, 00:20:32.519 "io_timeout": 0, 00:20:32.519 "avg_latency_us": 35896.27982418596, 00:20:32.519 "min_latency_us": 6359.419259259259, 00:20:32.519 "max_latency_us": 41748.85925925926 00:20:32.519 } 00:20:32.519 ], 00:20:32.519 "core_count": 1 00:20:32.519 } 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:32.777 
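The sequence the log records above — write the NVMe/TCP TLS pre-shared key to a 0600 temp file, register it on the bdevperf keyring with `keyring_file_add_key`, and attach the controller over TCP with `--psk` — can be sketched as a standalone script. The key value, NQNs, address, and rpc.py path below are taken from the log itself; as a sketch, the script only prepares the key file locally and echoes the RPC invocations rather than contacting a live nvmf_tgt/bdevperf pair.

```shell
#!/usr/bin/env bash
# Sketch of the TLS PSK setup performed by fips.sh in the log above.
# Dry run: the RPC calls are echoed, not executed against a running target.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"
BDEVPERF_SOCK=/var/tmp/bdevperf.sock

# 1. Write the TLS PSK to a private temp file; 0600 keeps it owner-readable only.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
printf '%s' "$key" > "$key_path"
chmod 0600 "$key_path"

# 2. Register the key with the bdevperf keyring, then attach the controller
#    with --psk so the TCP connection is established over TLS.
echo "$RPC" -s "$BDEVPERF_SOCK" keyring_file_add_key key0 "$key_path"
echo "$RPC" -s "$BDEVPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# 3. Remove the key file when done, as fips.sh's cleanup trap does.
rm -f "$key_path"
```

In the real test the first RPC targets the bdevperf socket started with `-z -r /var/tmp/bdevperf.sock`, and cleanup also runs `rm -f` on the PSK file (visible later in the log as `rm -f /tmp/spdk-psk.67N`).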
22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:32.777 nvmf_trace.0 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 95222 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 95222 ']' 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 95222 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95222 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95222' 00:20:32.777 killing process with pid 95222 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 95222 00:20:32.777 Received shutdown signal, test time was about 10.000000 seconds 00:20:32.777 00:20:32.777 Latency(us) 00:20:32.777 [2024-12-10T21:51:40.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.777 [2024-12-10T21:51:40.509Z] =================================================================================================================== 00:20:32.777 [2024-12-10T21:51:40.509Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.777 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 95222 00:20:33.035 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:33.035 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.035 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:33.035 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.035 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:33.035 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.035 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.035 rmmod nvme_tcp 00:20:33.035 rmmod nvme_fabrics 00:20:33.035 rmmod nvme_keyring 00:20:33.035 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.035 22:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:33.035 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 95068 ']' 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 95068 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 95068 ']' 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 95068 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95068 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95068' 00:20:33.036 killing process with pid 95068 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 95068 00:20:33.036 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 95068 00:20:33.294 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:33.294 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:33.294 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:33.294 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:33.294 
22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:33.294 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:33.294 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:33.294 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.294 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:33.294 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.294 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.294 22:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.831 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:35.831 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.67N 00:20:35.831 00:20:35.832 real 0m16.966s 00:20:35.832 user 0m22.485s 00:20:35.832 sys 0m5.376s 00:20:35.832 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.832 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:35.832 ************************************ 00:20:35.832 END TEST nvmf_fips 00:20:35.832 ************************************ 00:20:35.832 22:51:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:35.832 22:51:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:35.832 22:51:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:20:35.832 22:51:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:35.832 ************************************ 00:20:35.832 START TEST nvmf_control_msg_list 00:20:35.832 ************************************ 00:20:35.832 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:35.832 * Looking for test storage... 00:20:35.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:35.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.832 --rc genhtml_branch_coverage=1 00:20:35.832 --rc genhtml_function_coverage=1 00:20:35.832 --rc genhtml_legend=1 00:20:35.832 --rc geninfo_all_blocks=1 00:20:35.832 --rc geninfo_unexecuted_blocks=1 00:20:35.832 00:20:35.832 ' 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:35.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.832 --rc genhtml_branch_coverage=1 00:20:35.832 --rc genhtml_function_coverage=1 00:20:35.832 --rc genhtml_legend=1 00:20:35.832 --rc geninfo_all_blocks=1 00:20:35.832 --rc geninfo_unexecuted_blocks=1 00:20:35.832 00:20:35.832 ' 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:35.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.832 --rc genhtml_branch_coverage=1 00:20:35.832 --rc genhtml_function_coverage=1 00:20:35.832 --rc genhtml_legend=1 00:20:35.832 --rc geninfo_all_blocks=1 00:20:35.832 --rc geninfo_unexecuted_blocks=1 00:20:35.832 00:20:35.832 ' 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:35.832 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.832 --rc genhtml_branch_coverage=1 00:20:35.832 --rc genhtml_function_coverage=1 00:20:35.832 --rc genhtml_legend=1 00:20:35.832 --rc geninfo_all_blocks=1 00:20:35.832 --rc geninfo_unexecuted_blocks=1 00:20:35.832 00:20:35.832 ' 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.832 22:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.832 22:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:35.832 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.833 22:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:35.833 22:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:37.733 22:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:37.733 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:37.733 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:37.733 22:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:37.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.733 22:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:37.733 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.733 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.734 22:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:37.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:20:37.734 00:20:37.734 --- 10.0.0.2 ping statistics --- 00:20:37.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.734 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:20:37.734 00:20:37.734 --- 10.0.0.1 ping statistics --- 00:20:37.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.734 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=98487 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 98487 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 98487 ']' 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.734 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:37.992 [2024-12-10 22:51:45.485361] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:37.992 [2024-12-10 22:51:45.485456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.992 [2024-12-10 22:51:45.556369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.992 [2024-12-10 22:51:45.612126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.992 [2024-12-10 22:51:45.612181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.992 [2024-12-10 22:51:45.612204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.992 [2024-12-10 22:51:45.612215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.992 [2024-12-10 22:51:45.612225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:37.992 [2024-12-10 22:51:45.612834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.250 [2024-12-10 22:51:45.761056] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.250 Malloc0 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.250 [2024-12-10 22:51:45.801217] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.250 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=98507 00:20:38.251 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.251 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=98508 00:20:38.251 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.251 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=98509 00:20:38.251 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 98507 00:20:38.251 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.251 [2024-12-10 22:51:45.869941] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:38.251 [2024-12-10 22:51:45.870260] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:38.251 [2024-12-10 22:51:45.879504] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:39.624 Initializing NVMe Controllers 00:20:39.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:39.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:39.624 Initialization complete. Launching workers. 00:20:39.624 ======================================================== 00:20:39.624 Latency(us) 00:20:39.624 Device Information : IOPS MiB/s Average min max 00:20:39.624 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40930.00 40819.72 41586.82 00:20:39.624 ======================================================== 00:20:39.624 Total : 25.00 0.10 40930.00 40819.72 41586.82 00:20:39.624 00:20:39.624 Initializing NVMe Controllers 00:20:39.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:39.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:39.624 Initialization complete. Launching workers. 
00:20:39.624 ======================================================== 00:20:39.624 Latency(us) 00:20:39.624 Device Information : IOPS MiB/s Average min max 00:20:39.624 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 28.00 0.11 36504.41 247.92 40961.75 00:20:39.624 ======================================================== 00:20:39.624 Total : 28.00 0.11 36504.41 247.92 40961.75 00:20:39.624 00:20:39.624 Initializing NVMe Controllers 00:20:39.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:39.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:39.624 Initialization complete. Launching workers. 00:20:39.624 ======================================================== 00:20:39.624 Latency(us) 00:20:39.624 Device Information : IOPS MiB/s Average min max 00:20:39.624 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6054.99 23.65 164.77 154.30 442.97 00:20:39.624 ======================================================== 00:20:39.624 Total : 6054.99 23.65 164.77 154.30 442.97 00:20:39.624 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 98508 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 98509 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.624 22:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.624 rmmod nvme_tcp 00:20:39.624 rmmod nvme_fabrics 00:20:39.624 rmmod nvme_keyring 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 98487 ']' 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 98487 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 98487 ']' 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 98487 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98487 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 98487' 00:20:39.624 killing process with pid 98487 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 98487 00:20:39.624 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 98487 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.884 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.790 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:41.790 00:20:41.790 real 0m6.469s 00:20:41.790 user 0m5.812s 00:20:41.790 sys 0m2.723s 
00:20:41.790 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.790 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:41.790 ************************************ 00:20:41.790 END TEST nvmf_control_msg_list 00:20:41.790 ************************************ 00:20:41.790 22:51:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:41.790 22:51:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.790 22:51:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.790 22:51:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.790 ************************************ 00:20:41.790 START TEST nvmf_wait_for_buf 00:20:41.790 ************************************ 00:20:41.790 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:42.049 * Looking for test storage... 
00:20:42.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:42.049 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:20:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.050 --rc genhtml_branch_coverage=1 00:20:42.050 --rc genhtml_function_coverage=1 00:20:42.050 --rc genhtml_legend=1 00:20:42.050 --rc geninfo_all_blocks=1 00:20:42.050 --rc geninfo_unexecuted_blocks=1 00:20:42.050 00:20:42.050 ' 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.050 --rc genhtml_branch_coverage=1 00:20:42.050 --rc genhtml_function_coverage=1 00:20:42.050 --rc genhtml_legend=1 00:20:42.050 --rc geninfo_all_blocks=1 00:20:42.050 --rc geninfo_unexecuted_blocks=1 00:20:42.050 00:20:42.050 ' 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.050 --rc genhtml_branch_coverage=1 00:20:42.050 --rc genhtml_function_coverage=1 00:20:42.050 --rc genhtml_legend=1 00:20:42.050 --rc geninfo_all_blocks=1 00:20:42.050 --rc geninfo_unexecuted_blocks=1 00:20:42.050 00:20:42.050 ' 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.050 --rc genhtml_branch_coverage=1 00:20:42.050 --rc genhtml_function_coverage=1 00:20:42.050 --rc genhtml_legend=1 00:20:42.050 --rc geninfo_all_blocks=1 00:20:42.050 --rc geninfo_unexecuted_blocks=1 00:20:42.050 00:20:42.050 ' 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
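The `[: : integer expression expected` message captured above comes from `'[' '' -eq 1 ']'`: `test`'s `-eq` requires an integer operand, and the variable expanded to an empty string. A hedged sketch of the defensive pattern that avoids it (the `flag` variable here is illustrative, not from nvmf/common.sh):

```shell
# An unset/empty variable makes `[ "$flag" -eq 1 ]` fail with
# "integer expression expected", as nvmf/common.sh line 33 does in
# the log. Supplying a 0 default keeps the numeric test well-formed.
flag=""
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

Note `${flag:-0}` (with the colon) substitutes the default for both unset and empty values; `${flag-0}` would still pass the empty string through.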
gather_supported_nvmf_pci_devs 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.050 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:44.584 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:44.584 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.584 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:44.584 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.585 22:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:44.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:44.585 22:51:51 
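The loop above resolves each PCI device to its kernel network interface by globbing `/sys/bus/pci/devices/$pci/net/`* and stripping the path prefix (yielding `cvl_0_0` and `cvl_0_1`). A small sketch of that lookup; the base directory is a parameter here (my addition, for testability on any tree) where nvmf/common.sh hardcodes the sysfs path:

```shell
# Resolve the net interface name(s) backing a PCI device directory,
# mirroring nvmf/common.sh's pci_net_devs glob plus the ##*/ strip.
pci_net_names() {
    local dev_dir=$1 net
    for net in "$dev_dir"/net/*; do
        # The glob is left literal when nothing matches, so guard it.
        [ -e "$net" ] && echo "${net##*/}"
    done
}
```

On real hardware this would be called as `pci_net_names /sys/bus/pci/devices/0000:0a:00.0`.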
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.585 22:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:44.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:20:44.585 00:20:44.585 --- 10.0.0.2 ping statistics --- 00:20:44.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.585 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:20:44.585 00:20:44.585 --- 10.0.0.1 ping statistics --- 00:20:44.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.585 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=100706 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
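The `nvmf_tcp_init` sequence traced above builds a two-endpoint TCP topology on a single host by moving the target port into a network namespace, addressing both ends, and ping-verifying each direction. Stripped of the test plumbing, the core commands are roughly the following (interface names, namespace name, and addresses copied from the log; requires root):

```shell
# Target side lives in its own netns so initiator (host) and target
# can talk over real NICs on 10.0.0.0/24 within one machine.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> host
```

This is a command transcript reconstructed from the trace, not a runnable standalone script: it needs root, the two physical ports, and a clean namespace state.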
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 100706 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 100706 ']' 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.585 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:44.585 [2024-12-10 22:51:52.027697] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:44.585 [2024-12-10 22:51:52.027774] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.585 [2024-12-10 22:51:52.099365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.585 [2024-12-10 22:51:52.154717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.585 [2024-12-10 22:51:52.154773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:44.585 [2024-12-10 22:51:52.154796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.585 [2024-12-10 22:51:52.154807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.585 [2024-12-10 22:51:52.154817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.585 [2024-12-10 22:51:52.155404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:44.585 
22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.585 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:44.844 Malloc0 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.844 [2024-12-10 22:51:52.377503] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:44.844 [2024-12-10 22:51:52.401724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
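The target in this run is configured entirely over JSON-RPC: options are set before `framework_start_init`, then a malloc bdev, TCP transport, subsystem, namespace, and listener are created. Replayed outside the harness, the same `rpc_cmd` sequence would look roughly like this with SPDK's `scripts/rpc.py` client (relative paths and the default RPC socket are assumptions; all arguments are copied from the log — the deliberately tiny `--small-pool-count 154` and `-n 24 -b 24` pools are what force the wait-for-buf retry path this test exercises):

```shell
./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
./scripts/rpc.py framework_start_init
./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

After this, `spdk_nvme_perf -q 4 -o 131072 -w randread` is pointed at `traddr:10.0.0.2 trsvcid:4420`; 128 KiB reads against a 24-buffer pool guarantee buffer starvation. A live `nvmf_tgt` must be running (in this test, inside the `cvl_0_0_ns_spdk` namespace) for these commands to succeed.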
00:20:44.844 22:51:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:44.844 [2024-12-10 22:51:52.481041] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:46.216 Initializing NVMe Controllers 00:20:46.216 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:46.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:46.216 Initialization complete. Launching workers. 00:20:46.216 ======================================================== 00:20:46.216 Latency(us) 00:20:46.216 Device Information : IOPS MiB/s Average min max 00:20:46.216 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32354.13 8006.27 63864.04 00:20:46.216 ======================================================== 00:20:46.216 Total : 129.00 16.12 32354.13 8006.27 63864.04 00:20:46.216 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.474 22:51:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:46.474 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:46.474 rmmod nvme_tcp 00:20:46.474 rmmod nvme_fabrics 00:20:46.474 rmmod nvme_keyring 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 100706 ']' 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 100706 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 100706 ']' 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 100706 
00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100706 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100706' 00:20:46.474 killing process with pid 100706 00:20:46.474 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 100706 00:20:46.475 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 100706 00:20:46.734 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:46.734 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:46.734 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:46.734 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:46.734 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:46.734 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:46.734 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:46.734 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.734 22:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:46.734 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.734 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.734 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.642 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:48.642 00:20:48.643 real 0m6.800s 00:20:48.643 user 0m3.222s 00:20:48.643 sys 0m1.997s 00:20:48.643 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.643 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.643 ************************************ 00:20:48.643 END TEST nvmf_wait_for_buf 00:20:48.643 ************************************ 00:20:48.643 22:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:48.643 22:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:48.643 22:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:48.643 22:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:48.643 22:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:48.643 22:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:51.206 
22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:51.206 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.206 22:51:58 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:51.206 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:51.206 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:51.206 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:51.206 ************************************ 00:20:51.206 START TEST nvmf_perf_adq 00:20:51.206 ************************************ 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:51.206 * Looking for test storage... 00:20:51.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:51.206 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:51.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.207 --rc genhtml_branch_coverage=1 00:20:51.207 --rc genhtml_function_coverage=1 00:20:51.207 --rc genhtml_legend=1 00:20:51.207 --rc geninfo_all_blocks=1 00:20:51.207 --rc geninfo_unexecuted_blocks=1 00:20:51.207 00:20:51.207 ' 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:51.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.207 --rc genhtml_branch_coverage=1 00:20:51.207 --rc genhtml_function_coverage=1 00:20:51.207 --rc genhtml_legend=1 00:20:51.207 --rc geninfo_all_blocks=1 00:20:51.207 --rc geninfo_unexecuted_blocks=1 00:20:51.207 00:20:51.207 ' 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:51.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.207 --rc genhtml_branch_coverage=1 00:20:51.207 --rc genhtml_function_coverage=1 00:20:51.207 --rc genhtml_legend=1 00:20:51.207 --rc geninfo_all_blocks=1 00:20:51.207 --rc geninfo_unexecuted_blocks=1 00:20:51.207 00:20:51.207 ' 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:51.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.207 --rc genhtml_branch_coverage=1 00:20:51.207 --rc genhtml_function_coverage=1 00:20:51.207 --rc genhtml_legend=1 00:20:51.207 --rc geninfo_all_blocks=1 00:20:51.207 --rc geninfo_unexecuted_blocks=1 00:20:51.207 00:20:51.207 ' 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.207 22:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.207 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:53.132 22:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:53.132 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:53.392 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:53.392 
Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:53.392 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:53.392 22:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:53.392 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
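The discovery loop above (nvmf/common.sh@410-429) globs `/sys/bus/pci/devices/$pci/net/*` for each supported NIC and strips the directory prefix to get kernel interface names. A minimal standalone sketch of that string handling, with the sysfs path hardcoded for illustration (on a real host it comes from the glob):

```shell
#!/usr/bin/env bash
# Simulated glob result for one E810 port (hardcoded here; nvmf/common.sh@411
# fills this array from /sys/bus/pci/devices/$pci/net/*).
pci_net_devs=("/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0")

# nvmf/common.sh@427: keep only the basename of each sysfs path.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under 0000:0a:00.0: ${pci_net_devs[*]}"
```

The `##*/` parameter expansion is why the log prints bare names like `cvl_0_0` rather than full sysfs paths.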
00:20:53.392 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:53.961 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:56.500 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:01.773 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:01.773 22:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.773 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:01.774 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:01.774 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:01.774 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:01.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:01.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:21:01.774 00:21:01.774 --- 10.0.0.2 ping statistics --- 00:21:01.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.774 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:21:01.774 00:21:01.774 --- 10.0.0.1 ping statistics --- 00:21:01.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.774 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
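For reference, the namespace wiring that `nvmf_tcp_init` performs in the lines above (nvmf/common.sh@267-291) amounts to the following command sequence. This is a sketch of what the log shows, not meant to be run verbatim; it requires root, and the `cvl_0_0`/`cvl_0_1` names are the interfaces discovered earlier:

```shell
ip netns add cvl_0_0_ns_spdk               # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk  # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator IP stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
ping -c 1 10.0.0.2                          # host -> namespace sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> host
```

Putting the target port in its own namespace lets one physical machine act as both initiator (10.0.0.1) and target (10.0.0.2) over real E810 hardware.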
start_nvmf_tgt 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=105556 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 105556 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 105556 ']' 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.774 [2024-12-10 22:52:09.216071] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:21:01.774 [2024-12-10 22:52:09.216144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.774 [2024-12-10 22:52:09.288615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:01.774 [2024-12-10 22:52:09.346934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.774 [2024-12-10 22:52:09.346982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.774 [2024-12-10 22:52:09.347006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.774 [2024-12-10 22:52:09.347017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.774 [2024-12-10 22:52:09.347027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:01.774 [2024-12-10 22:52:09.348520] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.774 [2024-12-10 22:52:09.348606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.774 [2024-12-10 22:52:09.348601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.774 [2024-12-10 22:52:09.348568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:01.774 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:01.775 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:01.775 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.775 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.775 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:02.034 22:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.034 [2024-12-10 22:52:09.626542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.034 Malloc1 00:21:02.034 22:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:02.034 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.035 [2024-12-10 22:52:09.688935] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=105588 00:21:02.035 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:02.035 22:52:09 
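The `adq_configure_nvmf_target` steps logged above (perf_adq.sh@42-49) map to this RPC sequence against the `--wait-for-rpc` target. Sketch only: `rpc_cmd` in the suite is assumed to wrap SPDK's `scripts/rpc.py` talking to `/var/tmp/spdk.sock` inside the target namespace:

```shell
rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
rpc.py framework_start_init                 # deferred because of --wait-for-rpc
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Setting the socket options before `framework_start_init` is the point of `--wait-for-rpc`: placement-id and zero-copy must be configured before the posix sock layer initializes.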
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:04.574 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:04.574 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.574 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.574 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.574 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:04.574 "tick_rate": 2700000000, 00:21:04.574 "poll_groups": [ 00:21:04.574 { 00:21:04.574 "name": "nvmf_tgt_poll_group_000", 00:21:04.574 "admin_qpairs": 1, 00:21:04.574 "io_qpairs": 1, 00:21:04.574 "current_admin_qpairs": 1, 00:21:04.574 "current_io_qpairs": 1, 00:21:04.574 "pending_bdev_io": 0, 00:21:04.574 "completed_nvme_io": 19318, 00:21:04.574 "transports": [ 00:21:04.574 { 00:21:04.574 "trtype": "TCP" 00:21:04.574 } 00:21:04.574 ] 00:21:04.574 }, 00:21:04.574 { 00:21:04.574 "name": "nvmf_tgt_poll_group_001", 00:21:04.574 "admin_qpairs": 0, 00:21:04.574 "io_qpairs": 1, 00:21:04.574 "current_admin_qpairs": 0, 00:21:04.574 "current_io_qpairs": 1, 00:21:04.574 "pending_bdev_io": 0, 00:21:04.574 "completed_nvme_io": 19880, 00:21:04.574 "transports": [ 00:21:04.574 { 00:21:04.574 "trtype": "TCP" 00:21:04.574 } 00:21:04.574 ] 00:21:04.574 }, 00:21:04.574 { 00:21:04.574 "name": "nvmf_tgt_poll_group_002", 00:21:04.574 "admin_qpairs": 0, 00:21:04.574 "io_qpairs": 1, 00:21:04.574 "current_admin_qpairs": 0, 00:21:04.574 "current_io_qpairs": 1, 00:21:04.574 "pending_bdev_io": 0, 00:21:04.574 "completed_nvme_io": 19891, 00:21:04.574 
"transports": [ 00:21:04.574 { 00:21:04.574 "trtype": "TCP" 00:21:04.574 } 00:21:04.574 ] 00:21:04.574 }, 00:21:04.574 { 00:21:04.574 "name": "nvmf_tgt_poll_group_003", 00:21:04.574 "admin_qpairs": 0, 00:21:04.574 "io_qpairs": 1, 00:21:04.574 "current_admin_qpairs": 0, 00:21:04.574 "current_io_qpairs": 1, 00:21:04.574 "pending_bdev_io": 0, 00:21:04.574 "completed_nvme_io": 19846, 00:21:04.574 "transports": [ 00:21:04.574 { 00:21:04.574 "trtype": "TCP" 00:21:04.574 } 00:21:04.574 ] 00:21:04.574 } 00:21:04.574 ] 00:21:04.574 }' 00:21:04.575 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:04.575 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:04.575 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:04.575 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:04.575 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 105588 00:21:12.690 Initializing NVMe Controllers 00:21:12.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:12.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:12.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:12.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:12.690 Initialization complete. Launching workers. 
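The pass/fail check at perf_adq.sh@86-87 pipes `nvmf_get_stats` through `jq` and counts poll groups that picked up exactly one I/O qpair, failing unless all 4 did. The same count, sketched with `grep` over an abbreviated copy of the stats above so the snippet has no jq dependency:

```shell
#!/usr/bin/env bash
# Abbreviated nvmf_get_stats output from the run above, one poll group per line.
stats='{"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1}
{"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1}
{"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1}
{"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1}'

# Stand-in for: jq -r '.poll_groups[] | select(.current_io_qpairs == 1)' | wc -l
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 1')
echo "$count"   # one io_qpair landed on each of the 4 poll groups (-m 0xF)
```

An even spread of qpairs across poll groups is what ADQ is expected to produce here; a skewed count would mean the traffic-class steering did not take effect.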
00:21:12.690 ======================================================== 00:21:12.690 Latency(us) 00:21:12.690 Device Information : IOPS MiB/s Average min max 00:21:12.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10372.70 40.52 6169.74 2381.11 10187.22 00:21:12.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10474.20 40.91 6111.41 2294.10 10192.51 00:21:12.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10508.00 41.05 6091.07 2392.63 10115.07 00:21:12.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10330.80 40.35 6195.62 1995.27 10997.49 00:21:12.690 ======================================================== 00:21:12.690 Total : 41685.69 162.83 6141.67 1995.27 10997.49 00:21:12.690 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.690 rmmod nvme_tcp 00:21:12.690 rmmod nvme_fabrics 00:21:12.690 rmmod nvme_keyring 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:12.690 22:52:19 
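The Total row in the perf summary above is a plain sum of the per-core IOPS and an IOPS-weighted mean of the per-core average latencies. A quick check with the four rows copied from the table (the table's 41685.69 total comes from unrounded internal values, so summing the rounded rows lands at 41685.70):

```shell
#!/usr/bin/env bash
# Per-core (IOPS, avg latency in us) rows copied from the perf table above.
total=$(awk 'BEGIN { printf "%.2f", 10372.70 + 10474.20 + 10508.00 + 10330.80 }')
avg=$(awk 'BEGIN {
  split("10372.70 10474.20 10508.00 10330.80", iops)
  split("6169.74 6111.41 6091.07 6195.62", lat)
  for (i = 1; i <= 4; i++) { t += iops[i]; w += iops[i] * lat[i] }
  printf "%.2f", w / t   # weighted by each core share of the I/O
}')
echo "Total: $total IOPS, $avg us avg"
```

The weighted mean reproduces the reported total average latency of ~6141.67 us, confirming the four cores carried near-equal shares of the 41.7k IOPS.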
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 105556 ']' 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 105556 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 105556 ']' 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 105556 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105556 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105556' 00:21:12.690 killing process with pid 105556 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 105556 00:21:12.690 22:52:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 105556 00:21:12.690 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:12.690 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:12.690 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:12.690 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:12.690 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:12.690 22:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:12.690 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:12.690 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:12.690 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:12.690 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.690 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.690 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.596 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:14.596 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:14.596 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:14.596 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:15.533 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:18.064 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.341 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.342 22:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:23.342 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:23.342 
Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:23.342 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:23.342 22:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:23.342 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:21:23.342 00:21:23.342 --- 10.0.0.2 ping statistics --- 00:21:23.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.342 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:21:23.342 00:21:23.342 --- 10.0.0.1 ping statistics --- 00:21:23.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.342 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.342 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:23.343 net.core.busy_poll = 1 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:23.343 net.core.busy_read = 1 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=108205 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
108205 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 108205 ']' 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 [2024-12-10 22:52:30.588350] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:21:23.343 [2024-12-10 22:52:30.588452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.343 [2024-12-10 22:52:30.664062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.343 [2024-12-10 22:52:30.723338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.343 [2024-12-10 22:52:30.723405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.343 [2024-12-10 22:52:30.723433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.343 [2024-12-10 22:52:30.723444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:23.343 [2024-12-10 22:52:30.723453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.343 [2024-12-10 22:52:30.725047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.343 [2024-12-10 22:52:30.725113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.343 [2024-12-10 22:52:30.725179] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.343 [2024-12-10 22:52:30.725182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.343 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 [2024-12-10 22:52:31.008033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.343 22:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 Malloc1 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.343 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 [2024-12-10 22:52:31.069776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.602 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.602 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=108357 
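The ADQ driver configuration traced above (hardware TC offload, kernel busy polling, an mqprio channel qdisc, and a flower filter steering NVMe/TCP port 4420 onto its own traffic class) can be summarized as a standalone sketch. This is a hedged restatement of the commands in the trace, not a verified recipe: the interface name `cvl_0_0`, the 10.0.0.2 address, and the `2@0`/`2@2` queue split are specific to this test bed and would differ elsewhere.

```shell
# Sketch of the ADQ setup steps from the trace above; interface, IP, and
# queue layout are taken from this particular E810 test bed.
IFACE=cvl_0_0

# Enable hardware TC offload; disable packet-inspect optimization for ADQ
ethtool --offload "$IFACE" hw-tc-offload on
ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

# Enable kernel busy polling for low-latency socket reads
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 on queues 0-1, TC1 on queues 2-3, channel mode
tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

# Steer NVMe/TCP traffic (dst port 4420) into TC1 via a hardware flower filter
tc qdisc add dev "$IFACE" ingress
tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

In the trace these commands run under `ip netns exec cvl_0_0_ns_spdk`, since the target-side port has been moved into its own network namespace.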
00:21:23.602 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:23.602 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:25.502 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:25.502 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.502 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.502 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.502 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:25.502 "tick_rate": 2700000000, 00:21:25.502 "poll_groups": [ 00:21:25.502 { 00:21:25.502 "name": "nvmf_tgt_poll_group_000", 00:21:25.502 "admin_qpairs": 1, 00:21:25.502 "io_qpairs": 3, 00:21:25.502 "current_admin_qpairs": 1, 00:21:25.502 "current_io_qpairs": 3, 00:21:25.502 "pending_bdev_io": 0, 00:21:25.502 "completed_nvme_io": 26204, 00:21:25.502 "transports": [ 00:21:25.502 { 00:21:25.502 "trtype": "TCP" 00:21:25.502 } 00:21:25.502 ] 00:21:25.502 }, 00:21:25.502 { 00:21:25.502 "name": "nvmf_tgt_poll_group_001", 00:21:25.502 "admin_qpairs": 0, 00:21:25.502 "io_qpairs": 1, 00:21:25.502 "current_admin_qpairs": 0, 00:21:25.502 "current_io_qpairs": 1, 00:21:25.502 "pending_bdev_io": 0, 00:21:25.502 "completed_nvme_io": 24944, 00:21:25.502 "transports": [ 00:21:25.502 { 00:21:25.502 "trtype": "TCP" 00:21:25.502 } 00:21:25.502 ] 00:21:25.502 }, 00:21:25.502 { 00:21:25.502 "name": "nvmf_tgt_poll_group_002", 00:21:25.502 "admin_qpairs": 0, 00:21:25.502 "io_qpairs": 0, 00:21:25.502 "current_admin_qpairs": 0, 
00:21:25.502 "current_io_qpairs": 0, 00:21:25.502 "pending_bdev_io": 0, 00:21:25.502 "completed_nvme_io": 0, 00:21:25.502 "transports": [ 00:21:25.502 { 00:21:25.502 "trtype": "TCP" 00:21:25.502 } 00:21:25.502 ] 00:21:25.502 }, 00:21:25.502 { 00:21:25.502 "name": "nvmf_tgt_poll_group_003", 00:21:25.502 "admin_qpairs": 0, 00:21:25.502 "io_qpairs": 0, 00:21:25.502 "current_admin_qpairs": 0, 00:21:25.502 "current_io_qpairs": 0, 00:21:25.502 "pending_bdev_io": 0, 00:21:25.502 "completed_nvme_io": 0, 00:21:25.502 "transports": [ 00:21:25.502 { 00:21:25.502 "trtype": "TCP" 00:21:25.502 } 00:21:25.502 ] 00:21:25.502 } 00:21:25.502 ] 00:21:25.502 }' 00:21:25.502 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:25.502 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:25.502 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:25.502 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:25.502 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 108357 00:21:33.614 Initializing NVMe Controllers 00:21:33.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:33.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:33.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:33.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:33.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:33.614 Initialization complete. Launching workers. 
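The trace above verifies ADQ steering by piping `nvmf_get_stats` through `jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'` and `wc -l`, then checking the count of idle poll groups. A minimal stand-alone sketch of the same check, using a hypothetical trimmed-down stats document and `grep` standing in for the `jq` pipeline, might look like:

```shell
# Hypothetical, trimmed-down nvmf_get_stats output modeled on the trace;
# only the field the idle-poll-group check inspects is kept.
stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 3 },
    { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0 },
    { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0 }
  ]
}'

# Count poll groups with no active IO qpairs; with ADQ working, traffic is
# concentrated on a subset of poll groups, leaving the rest idle.
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 0')
echo "$count"   # prints 2 for the sample stats above
```

The test then asserts this count is not below a threshold (here `[[ 2 -lt 2 ]]` fails, so the check passes) before waiting on the `spdk_nvme_perf` process.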
00:21:33.614 ======================================================== 00:21:33.614 Latency(us) 00:21:33.614 Device Information : IOPS MiB/s Average min max 00:21:33.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13396.19 52.33 4777.68 1726.36 46286.45 00:21:33.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4440.56 17.35 14416.92 1924.73 63867.11 00:21:33.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4911.26 19.18 13037.34 2001.96 62868.91 00:21:33.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4479.66 17.50 14295.31 2141.34 60320.97 00:21:33.614 ======================================================== 00:21:33.614 Total : 27227.68 106.36 9405.50 1726.36 63867.11 00:21:33.614 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:33.614 rmmod nvme_tcp 00:21:33.614 rmmod nvme_fabrics 00:21:33.614 rmmod nvme_keyring 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:33.614 22:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 108205 ']' 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 108205 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 108205 ']' 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 108205 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108205 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108205' 00:21:33.614 killing process with pid 108205 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 108205 00:21:33.614 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 108205 00:21:33.872 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:33.872 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:33.872 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:33.872 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:33.872 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:33.872 22:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:33.872 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:34.130 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:34.130 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:34.130 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.130 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.130 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:37.454 00:21:37.454 real 0m46.098s 00:21:37.454 user 2m40.754s 00:21:37.454 sys 0m9.135s 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.454 ************************************ 00:21:37.454 END TEST nvmf_perf_adq 00:21:37.454 ************************************ 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:37.454 ************************************ 00:21:37.454 START TEST nvmf_shutdown 00:21:37.454 ************************************ 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:37.454 * Looking for test storage... 00:21:37.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.454 22:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:37.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.454 --rc genhtml_branch_coverage=1 00:21:37.454 --rc genhtml_function_coverage=1 00:21:37.454 --rc genhtml_legend=1 00:21:37.454 --rc geninfo_all_blocks=1 00:21:37.454 --rc geninfo_unexecuted_blocks=1 00:21:37.454 00:21:37.454 ' 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:37.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.454 --rc genhtml_branch_coverage=1 00:21:37.454 --rc genhtml_function_coverage=1 00:21:37.454 --rc genhtml_legend=1 00:21:37.454 --rc geninfo_all_blocks=1 00:21:37.454 --rc geninfo_unexecuted_blocks=1 00:21:37.454 00:21:37.454 ' 00:21:37.454 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:37.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.454 --rc genhtml_branch_coverage=1 00:21:37.454 --rc genhtml_function_coverage=1 00:21:37.454 --rc genhtml_legend=1 00:21:37.454 --rc geninfo_all_blocks=1 00:21:37.455 --rc geninfo_unexecuted_blocks=1 00:21:37.455 00:21:37.455 ' 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:37.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.455 --rc genhtml_branch_coverage=1 00:21:37.455 --rc genhtml_function_coverage=1 00:21:37.455 --rc genhtml_legend=1 00:21:37.455 --rc geninfo_all_blocks=1 00:21:37.455 --rc geninfo_unexecuted_blocks=1 00:21:37.455 00:21:37.455 ' 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:37.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:37.455 ************************************ 00:21:37.455 START TEST nvmf_shutdown_tc1 00:21:37.455 ************************************ 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:37.455 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:39.357 22:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.357 22:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.357 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:39.358 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.358 22:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:39.358 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:39.358 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:39.358 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:39.358 22:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:39.358 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:39.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:21:39.617 00:21:39.617 --- 10.0.0.2 ping statistics --- 00:21:39.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.617 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:21:39.617 00:21:39.617 --- 10.0.0.1 ping statistics --- 00:21:39.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.617 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=111660 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 111660 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 111660 ']' 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:39.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.617 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:39.617 [2024-12-10 22:52:47.302243] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:21:39.617 [2024-12-10 22:52:47.302328] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.876 [2024-12-10 22:52:47.384608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.876 [2024-12-10 22:52:47.442778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.876 [2024-12-10 22:52:47.442830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.876 [2024-12-10 22:52:47.442860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.876 [2024-12-10 22:52:47.442872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.876 [2024-12-10 22:52:47.442882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
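The repeated `config+=("$(cat <<-EOF ... )")` expansions that dominate the rest of this log come from the `gen_nvmf_target_json` helper in `nvmf/common.sh`: one `bdev_nvme_attach_controller` params block per subsystem number, comma-joined into the `--json` config fed to bdevperf. A minimal standalone sketch of that pattern is below; the variable defaults (`tcp`, `10.0.0.2`, `4420`) are taken from the values visible in this log, and the surrounding `jq`/wrapping done by the real script is omitted, so treat this as an illustration rather than the exact helper.

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern expanded repeatedly in this log:
# emit one "params" object per subsystem number, then comma-join them.
# Defaults below mirror the values printed by nvmf/common.sh@586 in the log;
# the real helper additionally wraps/validates the result before bdevperf use.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_nvmf_target_json() {
	local subsystem
	local config=()
	# "${@:-1}" defaults to subsystem 1 when no arguments are given,
	# matching the loop header seen in the xtrace output.
	for subsystem in "${@:-1}"; do
		config+=("$(cat <<-EOF
		{
		  "params": {
		    "name": "Nvme$subsystem",
		    "trtype": "$TEST_TRANSPORT",
		    "traddr": "$NVMF_FIRST_TARGET_IP",
		    "adrfam": "ipv4",
		    "trsvcid": "$NVMF_PORT",
		    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
		    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
		    "hdgst": ${hdgst:-false},
		    "ddgst": ${ddgst:-false}
		  },
		  "method": "bdev_nvme_attach_controller"
		}
		EOF
		)")
	done
	# Join the per-subsystem objects with commas, as in the printf '%s\n'
	# expansion shown later in this log.
	local IFS=,
	printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1 2 3
```

The `<<-EOF` form strips leading tabs so the heredoc body can stay indented inside the function, which is why the log's expanded JSON appears unindented.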
00:21:39.876 [2024-12-10 22:52:47.444381] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.876 [2024-12-10 22:52:47.444405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.876 [2024-12-10 22:52:47.444464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:39.876 [2024-12-10 22:52:47.444467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.876 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.876 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:39.876 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:39.876 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:39.876 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:39.876 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.876 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:39.876 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.876 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:39.876 [2024-12-10 22:52:47.598249] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.134 22:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.134 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 Malloc1 00:21:40.134 [2024-12-10 22:52:47.712063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.134 Malloc2 00:21:40.134 Malloc3 00:21:40.134 Malloc4 00:21:40.392 Malloc5 00:21:40.392 Malloc6 00:21:40.392 Malloc7 00:21:40.392 Malloc8 00:21:40.392 Malloc9 
00:21:40.651 Malloc10 00:21:40.651 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.651 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:40.651 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.651 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.651 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=111843 00:21:40.651 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 111843 /var/tmp/bdevperf.sock 00:21:40.651 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 111843 ']' 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:40.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:40.652 { 00:21:40.652 "params": { 00:21:40.652 "name": "Nvme$subsystem", 00:21:40.652 "trtype": "$TEST_TRANSPORT", 00:21:40.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.652 "adrfam": "ipv4", 00:21:40.652 "trsvcid": "$NVMF_PORT", 00:21:40.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.652 "hdgst": ${hdgst:-false}, 00:21:40.652 "ddgst": ${ddgst:-false} 00:21:40.652 }, 00:21:40.652 "method": "bdev_nvme_attach_controller" 00:21:40.652 } 00:21:40.652 EOF 00:21:40.652 )") 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:40.652 { 00:21:40.652 "params": { 00:21:40.652 "name": "Nvme$subsystem", 00:21:40.652 "trtype": "$TEST_TRANSPORT", 00:21:40.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.652 "adrfam": "ipv4", 00:21:40.652 "trsvcid": "$NVMF_PORT", 00:21:40.652 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.652 "hdgst": ${hdgst:-false}, 00:21:40.652 "ddgst": ${ddgst:-false} 00:21:40.652 }, 00:21:40.652 "method": "bdev_nvme_attach_controller" 00:21:40.652 } 00:21:40.652 EOF 00:21:40.652 )") 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:40.652 { 00:21:40.652 "params": { 00:21:40.652 "name": "Nvme$subsystem", 00:21:40.652 "trtype": "$TEST_TRANSPORT", 00:21:40.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.652 "adrfam": "ipv4", 00:21:40.652 "trsvcid": "$NVMF_PORT", 00:21:40.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.652 "hdgst": ${hdgst:-false}, 00:21:40.652 "ddgst": ${ddgst:-false} 00:21:40.652 }, 00:21:40.652 "method": "bdev_nvme_attach_controller" 00:21:40.652 } 00:21:40.652 EOF 00:21:40.652 )") 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:40.652 { 00:21:40.652 "params": { 00:21:40.652 "name": "Nvme$subsystem", 00:21:40.652 "trtype": "$TEST_TRANSPORT", 00:21:40.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.652 "adrfam": "ipv4", 00:21:40.652 "trsvcid": "$NVMF_PORT", 00:21:40.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.652 "hdgst": 
${hdgst:-false}, 00:21:40.652 "ddgst": ${ddgst:-false} 00:21:40.652 }, 00:21:40.652 "method": "bdev_nvme_attach_controller" 00:21:40.652 } 00:21:40.652 EOF 00:21:40.652 )") 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:40.652 { 00:21:40.652 "params": { 00:21:40.652 "name": "Nvme$subsystem", 00:21:40.652 "trtype": "$TEST_TRANSPORT", 00:21:40.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.652 "adrfam": "ipv4", 00:21:40.652 "trsvcid": "$NVMF_PORT", 00:21:40.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.652 "hdgst": ${hdgst:-false}, 00:21:40.652 "ddgst": ${ddgst:-false} 00:21:40.652 }, 00:21:40.652 "method": "bdev_nvme_attach_controller" 00:21:40.652 } 00:21:40.652 EOF 00:21:40.652 )") 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:40.652 { 00:21:40.652 "params": { 00:21:40.652 "name": "Nvme$subsystem", 00:21:40.652 "trtype": "$TEST_TRANSPORT", 00:21:40.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.652 "adrfam": "ipv4", 00:21:40.652 "trsvcid": "$NVMF_PORT", 00:21:40.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.652 "hdgst": ${hdgst:-false}, 00:21:40.652 "ddgst": ${ddgst:-false} 00:21:40.652 }, 00:21:40.652 "method": "bdev_nvme_attach_controller" 
00:21:40.652 } 00:21:40.652 EOF 00:21:40.652 )") 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:40.652 { 00:21:40.652 "params": { 00:21:40.652 "name": "Nvme$subsystem", 00:21:40.652 "trtype": "$TEST_TRANSPORT", 00:21:40.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.652 "adrfam": "ipv4", 00:21:40.652 "trsvcid": "$NVMF_PORT", 00:21:40.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.652 "hdgst": ${hdgst:-false}, 00:21:40.652 "ddgst": ${ddgst:-false} 00:21:40.652 }, 00:21:40.652 "method": "bdev_nvme_attach_controller" 00:21:40.652 } 00:21:40.652 EOF 00:21:40.652 )") 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:40.652 { 00:21:40.652 "params": { 00:21:40.652 "name": "Nvme$subsystem", 00:21:40.652 "trtype": "$TEST_TRANSPORT", 00:21:40.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.652 "adrfam": "ipv4", 00:21:40.652 "trsvcid": "$NVMF_PORT", 00:21:40.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.652 "hdgst": ${hdgst:-false}, 00:21:40.652 "ddgst": ${ddgst:-false} 00:21:40.652 }, 00:21:40.652 "method": "bdev_nvme_attach_controller" 00:21:40.652 } 00:21:40.652 EOF 00:21:40.652 )") 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:40.652 { 00:21:40.652 "params": { 00:21:40.652 "name": "Nvme$subsystem", 00:21:40.652 "trtype": "$TEST_TRANSPORT", 00:21:40.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.652 "adrfam": "ipv4", 00:21:40.652 "trsvcid": "$NVMF_PORT", 00:21:40.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.652 "hdgst": ${hdgst:-false}, 00:21:40.652 "ddgst": ${ddgst:-false} 00:21:40.652 }, 00:21:40.652 "method": "bdev_nvme_attach_controller" 00:21:40.652 } 00:21:40.652 EOF 00:21:40.652 )") 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:40.652 { 00:21:40.652 "params": { 00:21:40.652 "name": "Nvme$subsystem", 00:21:40.652 "trtype": "$TEST_TRANSPORT", 00:21:40.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.652 "adrfam": "ipv4", 00:21:40.652 "trsvcid": "$NVMF_PORT", 00:21:40.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.652 "hdgst": ${hdgst:-false}, 00:21:40.652 "ddgst": ${ddgst:-false} 00:21:40.652 }, 00:21:40.652 "method": "bdev_nvme_attach_controller" 00:21:40.652 } 00:21:40.652 EOF 00:21:40.652 )") 00:21:40.652 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:40.653 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:21:40.653 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:40.653 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:40.653 "params": { 00:21:40.653 "name": "Nvme1", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false 00:21:40.653 }, 00:21:40.653 "method": "bdev_nvme_attach_controller" 00:21:40.653 },{ 00:21:40.653 "params": { 00:21:40.653 "name": "Nvme2", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:40.653 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false 00:21:40.653 }, 00:21:40.653 "method": "bdev_nvme_attach_controller" 00:21:40.653 },{ 00:21:40.653 "params": { 00:21:40.653 "name": "Nvme3", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:40.653 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false 00:21:40.653 }, 00:21:40.653 "method": "bdev_nvme_attach_controller" 00:21:40.653 },{ 00:21:40.653 "params": { 00:21:40.653 "name": "Nvme4", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:40.653 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false 00:21:40.653 }, 00:21:40.653 "method": "bdev_nvme_attach_controller" 00:21:40.653 },{ 
00:21:40.653 "params": { 00:21:40.653 "name": "Nvme5", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:40.653 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false 00:21:40.653 }, 00:21:40.653 "method": "bdev_nvme_attach_controller" 00:21:40.653 },{ 00:21:40.653 "params": { 00:21:40.653 "name": "Nvme6", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:40.653 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false 00:21:40.653 }, 00:21:40.653 "method": "bdev_nvme_attach_controller" 00:21:40.653 },{ 00:21:40.653 "params": { 00:21:40.653 "name": "Nvme7", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:40.653 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false 00:21:40.653 }, 00:21:40.653 "method": "bdev_nvme_attach_controller" 00:21:40.653 },{ 00:21:40.653 "params": { 00:21:40.653 "name": "Nvme8", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:40.653 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false 00:21:40.653 }, 00:21:40.653 "method": "bdev_nvme_attach_controller" 00:21:40.653 },{ 00:21:40.653 "params": { 00:21:40.653 "name": "Nvme9", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:40.653 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false 00:21:40.653 }, 00:21:40.653 "method": "bdev_nvme_attach_controller" 00:21:40.653 },{ 00:21:40.653 "params": { 00:21:40.653 "name": "Nvme10", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:40.653 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false 00:21:40.653 }, 00:21:40.653 "method": "bdev_nvme_attach_controller" 00:21:40.653 }' 00:21:40.653 [2024-12-10 22:52:48.230668] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:21:40.653 [2024-12-10 22:52:48.230747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:40.653 [2024-12-10 22:52:48.302171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.653 [2024-12-10 22:52:48.361312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.552 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.552 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:42.552 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:42.552 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.552 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:42.552 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.552 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 111843 00:21:42.552 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:42.552 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:43.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 111843 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 111660 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.924 { 00:21:43.924 "params": { 00:21:43.924 "name": "Nvme$subsystem", 00:21:43.924 "trtype": "$TEST_TRANSPORT", 00:21:43.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.924 "adrfam": "ipv4", 00:21:43.924 "trsvcid": "$NVMF_PORT", 00:21:43.924 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.924 "hdgst": ${hdgst:-false}, 00:21:43.924 "ddgst": ${ddgst:-false} 00:21:43.924 }, 00:21:43.924 "method": "bdev_nvme_attach_controller" 00:21:43.924 } 00:21:43.924 EOF 00:21:43.924 )") 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.924 { 00:21:43.924 "params": { 00:21:43.924 "name": "Nvme$subsystem", 00:21:43.924 "trtype": "$TEST_TRANSPORT", 00:21:43.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.924 "adrfam": "ipv4", 00:21:43.924 "trsvcid": "$NVMF_PORT", 00:21:43.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.924 "hdgst": ${hdgst:-false}, 00:21:43.924 "ddgst": ${ddgst:-false} 00:21:43.924 }, 00:21:43.924 "method": "bdev_nvme_attach_controller" 00:21:43.924 } 00:21:43.924 EOF 00:21:43.924 )") 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.924 { 00:21:43.924 "params": { 00:21:43.924 "name": "Nvme$subsystem", 00:21:43.924 "trtype": "$TEST_TRANSPORT", 00:21:43.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.924 "adrfam": "ipv4", 00:21:43.924 "trsvcid": "$NVMF_PORT", 00:21:43.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.924 "hdgst": 
${hdgst:-false}, 00:21:43.924 "ddgst": ${ddgst:-false} 00:21:43.924 }, 00:21:43.924 "method": "bdev_nvme_attach_controller" 00:21:43.924 } 00:21:43.924 EOF 00:21:43.924 )") 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.924 { 00:21:43.924 "params": { 00:21:43.924 "name": "Nvme$subsystem", 00:21:43.924 "trtype": "$TEST_TRANSPORT", 00:21:43.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.924 "adrfam": "ipv4", 00:21:43.924 "trsvcid": "$NVMF_PORT", 00:21:43.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.924 "hdgst": ${hdgst:-false}, 00:21:43.924 "ddgst": ${ddgst:-false} 00:21:43.924 }, 00:21:43.924 "method": "bdev_nvme_attach_controller" 00:21:43.924 } 00:21:43.924 EOF 00:21:43.924 )") 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.924 { 00:21:43.924 "params": { 00:21:43.924 "name": "Nvme$subsystem", 00:21:43.924 "trtype": "$TEST_TRANSPORT", 00:21:43.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.924 "adrfam": "ipv4", 00:21:43.924 "trsvcid": "$NVMF_PORT", 00:21:43.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.924 "hdgst": ${hdgst:-false}, 00:21:43.924 "ddgst": ${ddgst:-false} 00:21:43.924 }, 00:21:43.924 "method": "bdev_nvme_attach_controller" 
00:21:43.924 } 00:21:43.924 EOF 00:21:43.924 )") 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.924 { 00:21:43.924 "params": { 00:21:43.924 "name": "Nvme$subsystem", 00:21:43.924 "trtype": "$TEST_TRANSPORT", 00:21:43.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.924 "adrfam": "ipv4", 00:21:43.924 "trsvcid": "$NVMF_PORT", 00:21:43.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.924 "hdgst": ${hdgst:-false}, 00:21:43.924 "ddgst": ${ddgst:-false} 00:21:43.924 }, 00:21:43.924 "method": "bdev_nvme_attach_controller" 00:21:43.924 } 00:21:43.924 EOF 00:21:43.924 )") 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.924 { 00:21:43.924 "params": { 00:21:43.924 "name": "Nvme$subsystem", 00:21:43.924 "trtype": "$TEST_TRANSPORT", 00:21:43.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.924 "adrfam": "ipv4", 00:21:43.924 "trsvcid": "$NVMF_PORT", 00:21:43.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.924 "hdgst": ${hdgst:-false}, 00:21:43.924 "ddgst": ${ddgst:-false} 00:21:43.924 }, 00:21:43.924 "method": "bdev_nvme_attach_controller" 00:21:43.924 } 00:21:43.924 EOF 00:21:43.924 )") 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.924 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.924 { 00:21:43.924 "params": { 00:21:43.924 "name": "Nvme$subsystem", 00:21:43.924 "trtype": "$TEST_TRANSPORT", 00:21:43.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "$NVMF_PORT", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.925 "hdgst": ${hdgst:-false}, 00:21:43.925 "ddgst": ${ddgst:-false} 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 } 00:21:43.925 EOF 00:21:43.925 )") 00:21:43.925 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:43.925 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.925 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.925 { 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme$subsystem", 00:21:43.925 "trtype": "$TEST_TRANSPORT", 00:21:43.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "$NVMF_PORT", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.925 "hdgst": ${hdgst:-false}, 00:21:43.925 "ddgst": ${ddgst:-false} 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 } 00:21:43.925 EOF 00:21:43.925 )") 00:21:43.925 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:43.925 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.925 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.925 { 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme$subsystem", 00:21:43.925 "trtype": "$TEST_TRANSPORT", 00:21:43.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "$NVMF_PORT", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.925 "hdgst": ${hdgst:-false}, 00:21:43.925 "ddgst": ${ddgst:-false} 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 } 00:21:43.925 EOF 00:21:43.925 )") 00:21:43.925 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:43.925 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:21:43.925 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:43.925 22:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme1", 00:21:43.925 "trtype": "tcp", 00:21:43.925 "traddr": "10.0.0.2", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "4420", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.925 "hdgst": false, 00:21:43.925 "ddgst": false 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 },{ 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme2", 00:21:43.925 "trtype": "tcp", 00:21:43.925 "traddr": "10.0.0.2", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "4420", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:43.925 "hdgst": false, 00:21:43.925 "ddgst": false 00:21:43.925 }, 
00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 },{ 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme3", 00:21:43.925 "trtype": "tcp", 00:21:43.925 "traddr": "10.0.0.2", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "4420", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:43.925 "hdgst": false, 00:21:43.925 "ddgst": false 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 },{ 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme4", 00:21:43.925 "trtype": "tcp", 00:21:43.925 "traddr": "10.0.0.2", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "4420", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:43.925 "hdgst": false, 00:21:43.925 "ddgst": false 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 },{ 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme5", 00:21:43.925 "trtype": "tcp", 00:21:43.925 "traddr": "10.0.0.2", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "4420", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:43.925 "hdgst": false, 00:21:43.925 "ddgst": false 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 },{ 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme6", 00:21:43.925 "trtype": "tcp", 00:21:43.925 "traddr": "10.0.0.2", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "4420", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:43.925 "hdgst": false, 00:21:43.925 "ddgst": false 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 },{ 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme7", 00:21:43.925 "trtype": "tcp", 00:21:43.925 "traddr": "10.0.0.2", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "4420", 00:21:43.925 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:43.925 "hdgst": false, 00:21:43.925 "ddgst": false 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 },{ 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme8", 00:21:43.925 "trtype": "tcp", 00:21:43.925 "traddr": "10.0.0.2", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "4420", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:43.925 "hdgst": false, 00:21:43.925 "ddgst": false 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 },{ 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme9", 00:21:43.925 "trtype": "tcp", 00:21:43.925 "traddr": "10.0.0.2", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "4420", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:43.925 "hdgst": false, 00:21:43.925 "ddgst": false 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 },{ 00:21:43.925 "params": { 00:21:43.925 "name": "Nvme10", 00:21:43.925 "trtype": "tcp", 00:21:43.925 "traddr": "10.0.0.2", 00:21:43.925 "adrfam": "ipv4", 00:21:43.925 "trsvcid": "4420", 00:21:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:43.925 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:43.925 "hdgst": false, 00:21:43.925 "ddgst": false 00:21:43.925 }, 00:21:43.925 "method": "bdev_nvme_attach_controller" 00:21:43.925 }' 00:21:43.925 [2024-12-10 22:52:51.326645] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:21:43.925 [2024-12-10 22:52:51.326729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112257 ] 00:21:43.925 [2024-12-10 22:52:51.398592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.925 [2024-12-10 22:52:51.457889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.298 Running I/O for 1 seconds... 00:21:46.671 1733.00 IOPS, 108.31 MiB/s 00:21:46.671 Latency(us) 00:21:46.671 [2024-12-10T21:52:54.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.671 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.671 Verification LBA range: start 0x0 length 0x400 00:21:46.671 Nvme1n1 : 1.02 188.04 11.75 0.00 0.00 336756.18 22330.79 276513.37 00:21:46.671 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.671 Verification LBA range: start 0x0 length 0x400 00:21:46.671 Nvme2n1 : 1.11 231.57 14.47 0.00 0.00 267620.50 19612.25 250104.79 00:21:46.671 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.671 Verification LBA range: start 0x0 length 0x400 00:21:46.671 Nvme3n1 : 1.10 243.29 15.21 0.00 0.00 248242.60 11505.21 257872.02 00:21:46.671 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.671 Verification LBA range: start 0x0 length 0x400 00:21:46.671 Nvme4n1 : 1.09 234.09 14.63 0.00 0.00 256680.20 17087.91 264085.81 00:21:46.671 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.671 Verification LBA range: start 0x0 length 0x400 00:21:46.671 Nvme5n1 : 1.11 230.82 14.43 0.00 0.00 256034.70 19709.35 262532.36 00:21:46.671 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.671 Verification LBA range: start 0x0 
length 0x400 00:21:46.671 Nvme6n1 : 1.12 232.81 14.55 0.00 0.00 248146.71 6893.42 259425.47 00:21:46.671 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.671 Verification LBA range: start 0x0 length 0x400 00:21:46.671 Nvme7n1 : 1.12 229.03 14.31 0.00 0.00 249140.72 19126.80 265639.25 00:21:46.671 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.671 Verification LBA range: start 0x0 length 0x400 00:21:46.671 Nvme8n1 : 1.19 269.86 16.87 0.00 0.00 209253.87 12621.75 262532.36 00:21:46.671 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.671 Verification LBA range: start 0x0 length 0x400 00:21:46.671 Nvme9n1 : 1.17 225.00 14.06 0.00 0.00 241835.38 7427.41 265639.25 00:21:46.671 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.671 Verification LBA range: start 0x0 length 0x400 00:21:46.671 Nvme10n1 : 1.19 268.11 16.76 0.00 0.00 203859.06 5267.15 282727.16 00:21:46.671 [2024-12-10T21:52:54.403Z] =================================================================================================================== 00:21:46.671 [2024-12-10T21:52:54.403Z] Total : 2352.61 147.04 0.00 0.00 247466.61 5267.15 282727.16 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:46.928 rmmod nvme_tcp 00:21:46.928 rmmod nvme_fabrics 00:21:46.928 rmmod nvme_keyring 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 111660 ']' 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 111660 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 111660 ']' 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 111660 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111660 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111660' 00:21:46.928 killing process with pid 111660 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 111660 00:21:46.928 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 111660 00:21:47.496 22:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.496 22:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.496 22:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.496 22:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:47.496 22:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:47.496 22:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.496 22:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.496 22:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.496 22:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:47.496 22:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.496 22:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.496 22:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:49.400 00:21:49.400 real 0m12.157s 00:21:49.400 user 0m35.274s 00:21:49.400 sys 0m3.298s 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.400 ************************************ 00:21:49.400 END TEST nvmf_shutdown_tc1 00:21:49.400 ************************************ 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:49.400 ************************************ 00:21:49.400 START TEST nvmf_shutdown_tc2 00:21:49.400 ************************************ 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:49.400 22:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.400 22:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.400 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:49.401 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.401 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:49.401 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:49.661 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.661 22:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:49.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:49.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:21:49.661 00:21:49.661 --- 10.0.0.2 ping statistics --- 00:21:49.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.661 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:21:49.661 00:21:49.661 --- 10.0.0.1 ping statistics --- 00:21:49.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.661 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.661 
22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=113029 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 113029 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 113029 ']' 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.661 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.661 [2024-12-10 22:52:57.353536] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:21:49.661 [2024-12-10 22:52:57.353638] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.920 [2024-12-10 22:52:57.424194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.920 [2024-12-10 22:52:57.479824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.920 [2024-12-10 22:52:57.479882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.920 [2024-12-10 22:52:57.479910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.920 [2024-12-10 22:52:57.479921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.920 [2024-12-10 22:52:57.479930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
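A few entries above, `nvmf_tcp_init` in nvmf/common.sh moves the target interface into a fresh network namespace, addresses both ends, opens TCP port 4420, and pings across in both directions before launching `nvmf_tgt`. That sequence can be sketched as a dry-run script; the interface and namespace names (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`) and addresses are taken from the log, while the `run` wrapper is an addition so the sketch is runnable without root (drop it to execute for real):

```shell
# Dry-run sketch of the namespace plumbing the log performs above.
# "run" only prints each command; remove it (and run as root) to execute.
run() { printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the default namespace, gets 10.0.0.1

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port, then verify connectivity both ways.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Keeping the target in its own namespace is what lets a single host exercise a real TCP path: the later `ip netns exec cvl_0_0_ns_spdk` prefix on the `nvmf_tgt` command line runs the target on the namespaced side of that link.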
00:21:49.920 [2024-12-10 22:52:57.481408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.920 [2024-12-10 22:52:57.481473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.920 [2024-12-10 22:52:57.481538] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:49.920 [2024-12-10 22:52:57.481541] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.920 [2024-12-10 22:52:57.631378] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.920 22:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.920 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
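`waitforlisten`, invoked above right after launching `nvmf_tgt`, blocks until the new process is alive and its RPC socket at /var/tmp/spdk.sock is ready, retrying up to `max_retries=100` times. A simplified sketch of that polling contract (`waitforlisten_sketch` is a hypothetical name; the real helper in autotest_common.sh probes the socket with an actual RPC call, which is reduced here to a path-existence check):

```shell
# Hypothetical, simplified version of the waitforlisten pattern seen above:
# poll until the target process is alive AND its RPC socket path exists.
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
  while (( max_retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1  # target process died early
    # Simplified: the real helper issues an RPC over the UNIX socket.
    [ -e "$rpc_addr" ] && return 0
    sleep 0.1
  done
  return 1                                  # timed out waiting for the socket
}
```

The early `kill -0` check matters: without it, a target that crashes on startup would make the caller spin for the full retry budget instead of failing fast.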
00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.178 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.178 Malloc1 00:21:50.178 [2024-12-10 22:52:57.724655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.178 Malloc2 00:21:50.178 Malloc3 00:21:50.178 Malloc4 00:21:50.178 Malloc5 00:21:50.436 Malloc6 00:21:50.436 Malloc7 00:21:50.436 Malloc8 00:21:50.436 Malloc9 
00:21:50.436 Malloc10 00:21:50.436 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.436 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:50.436 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.436 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.694 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=113205 00:21:50.694 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 113205 /var/tmp/bdevperf.sock 00:21:50.694 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 113205 ']' 00:21:50.694 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:50.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.695 { 00:21:50.695 "params": { 00:21:50.695 "name": "Nvme$subsystem", 00:21:50.695 "trtype": "$TEST_TRANSPORT", 00:21:50.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.695 "adrfam": "ipv4", 00:21:50.695 "trsvcid": "$NVMF_PORT", 00:21:50.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.695 "hdgst": ${hdgst:-false}, 00:21:50.695 "ddgst": ${ddgst:-false} 00:21:50.695 }, 00:21:50.695 "method": "bdev_nvme_attach_controller" 00:21:50.695 } 00:21:50.695 EOF 00:21:50.695 )") 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.695 { 00:21:50.695 "params": { 00:21:50.695 "name": "Nvme$subsystem", 00:21:50.695 "trtype": "$TEST_TRANSPORT", 00:21:50.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.695 "adrfam": "ipv4", 00:21:50.695 "trsvcid": "$NVMF_PORT", 00:21:50.695 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.695 "hdgst": ${hdgst:-false}, 00:21:50.695 "ddgst": ${ddgst:-false} 00:21:50.695 }, 00:21:50.695 "method": "bdev_nvme_attach_controller" 00:21:50.695 } 00:21:50.695 EOF 00:21:50.695 )") 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.695 { 00:21:50.695 "params": { 00:21:50.695 "name": "Nvme$subsystem", 00:21:50.695 "trtype": "$TEST_TRANSPORT", 00:21:50.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.695 "adrfam": "ipv4", 00:21:50.695 "trsvcid": "$NVMF_PORT", 00:21:50.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.695 "hdgst": ${hdgst:-false}, 00:21:50.695 "ddgst": ${ddgst:-false} 00:21:50.695 }, 00:21:50.695 "method": "bdev_nvme_attach_controller" 00:21:50.695 } 00:21:50.695 EOF 00:21:50.695 )") 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.695 { 00:21:50.695 "params": { 00:21:50.695 "name": "Nvme$subsystem", 00:21:50.695 "trtype": "$TEST_TRANSPORT", 00:21:50.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.695 "adrfam": "ipv4", 00:21:50.695 "trsvcid": "$NVMF_PORT", 00:21:50.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.695 "hdgst": 
${hdgst:-false}, 00:21:50.695 "ddgst": ${ddgst:-false} 00:21:50.695 }, 00:21:50.695 "method": "bdev_nvme_attach_controller" 00:21:50.695 } 00:21:50.695 EOF 00:21:50.695 )") 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.695 { 00:21:50.695 "params": { 00:21:50.695 "name": "Nvme$subsystem", 00:21:50.695 "trtype": "$TEST_TRANSPORT", 00:21:50.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.695 "adrfam": "ipv4", 00:21:50.695 "trsvcid": "$NVMF_PORT", 00:21:50.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.695 "hdgst": ${hdgst:-false}, 00:21:50.695 "ddgst": ${ddgst:-false} 00:21:50.695 }, 00:21:50.695 "method": "bdev_nvme_attach_controller" 00:21:50.695 } 00:21:50.695 EOF 00:21:50.695 )") 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.695 { 00:21:50.695 "params": { 00:21:50.695 "name": "Nvme$subsystem", 00:21:50.695 "trtype": "$TEST_TRANSPORT", 00:21:50.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.695 "adrfam": "ipv4", 00:21:50.695 "trsvcid": "$NVMF_PORT", 00:21:50.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.695 "hdgst": ${hdgst:-false}, 00:21:50.695 "ddgst": ${ddgst:-false} 00:21:50.695 }, 00:21:50.695 "method": "bdev_nvme_attach_controller" 
00:21:50.695 } 00:21:50.695 EOF 00:21:50.695 )") 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.695 { 00:21:50.695 "params": { 00:21:50.695 "name": "Nvme$subsystem", 00:21:50.695 "trtype": "$TEST_TRANSPORT", 00:21:50.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.695 "adrfam": "ipv4", 00:21:50.695 "trsvcid": "$NVMF_PORT", 00:21:50.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.695 "hdgst": ${hdgst:-false}, 00:21:50.695 "ddgst": ${ddgst:-false} 00:21:50.695 }, 00:21:50.695 "method": "bdev_nvme_attach_controller" 00:21:50.695 } 00:21:50.695 EOF 00:21:50.695 )") 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.695 { 00:21:50.695 "params": { 00:21:50.695 "name": "Nvme$subsystem", 00:21:50.695 "trtype": "$TEST_TRANSPORT", 00:21:50.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.695 "adrfam": "ipv4", 00:21:50.695 "trsvcid": "$NVMF_PORT", 00:21:50.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.695 "hdgst": ${hdgst:-false}, 00:21:50.695 "ddgst": ${ddgst:-false} 00:21:50.695 }, 00:21:50.695 "method": "bdev_nvme_attach_controller" 00:21:50.695 } 00:21:50.695 EOF 00:21:50.695 )") 00:21:50.695 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:21:50.696 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.696 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.696 { 00:21:50.696 "params": { 00:21:50.696 "name": "Nvme$subsystem", 00:21:50.696 "trtype": "$TEST_TRANSPORT", 00:21:50.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "$NVMF_PORT", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.696 "hdgst": ${hdgst:-false}, 00:21:50.696 "ddgst": ${ddgst:-false} 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 } 00:21:50.696 EOF 00:21:50.696 )") 00:21:50.696 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:50.696 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.696 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.696 { 00:21:50.696 "params": { 00:21:50.696 "name": "Nvme$subsystem", 00:21:50.696 "trtype": "$TEST_TRANSPORT", 00:21:50.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "$NVMF_PORT", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.696 "hdgst": ${hdgst:-false}, 00:21:50.696 "ddgst": ${ddgst:-false} 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 } 00:21:50.696 EOF 00:21:50.696 )") 00:21:50.696 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:50.696 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:21:50.696 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:50.696 22:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:50.696 "params": { 00:21:50.696 "name": "Nvme1", 00:21:50.696 "trtype": "tcp", 00:21:50.696 "traddr": "10.0.0.2", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "4420", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.696 "hdgst": false, 00:21:50.696 "ddgst": false 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 },{ 00:21:50.696 "params": { 00:21:50.696 "name": "Nvme2", 00:21:50.696 "trtype": "tcp", 00:21:50.696 "traddr": "10.0.0.2", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "4420", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:50.696 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:50.696 "hdgst": false, 00:21:50.696 "ddgst": false 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 },{ 00:21:50.696 "params": { 00:21:50.696 "name": "Nvme3", 00:21:50.696 "trtype": "tcp", 00:21:50.696 "traddr": "10.0.0.2", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "4420", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:50.696 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:50.696 "hdgst": false, 00:21:50.696 "ddgst": false 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 },{ 00:21:50.696 "params": { 00:21:50.696 "name": "Nvme4", 00:21:50.696 "trtype": "tcp", 00:21:50.696 "traddr": "10.0.0.2", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "4420", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:50.696 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:50.696 "hdgst": false, 00:21:50.696 "ddgst": false 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 },{ 
00:21:50.696 "params": { 00:21:50.696 "name": "Nvme5", 00:21:50.696 "trtype": "tcp", 00:21:50.696 "traddr": "10.0.0.2", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "4420", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:50.696 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:50.696 "hdgst": false, 00:21:50.696 "ddgst": false 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 },{ 00:21:50.696 "params": { 00:21:50.696 "name": "Nvme6", 00:21:50.696 "trtype": "tcp", 00:21:50.696 "traddr": "10.0.0.2", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "4420", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:50.696 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:50.696 "hdgst": false, 00:21:50.696 "ddgst": false 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 },{ 00:21:50.696 "params": { 00:21:50.696 "name": "Nvme7", 00:21:50.696 "trtype": "tcp", 00:21:50.696 "traddr": "10.0.0.2", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "4420", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:50.696 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:50.696 "hdgst": false, 00:21:50.696 "ddgst": false 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 },{ 00:21:50.696 "params": { 00:21:50.696 "name": "Nvme8", 00:21:50.696 "trtype": "tcp", 00:21:50.696 "traddr": "10.0.0.2", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "4420", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:50.696 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:50.696 "hdgst": false, 00:21:50.696 "ddgst": false 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 },{ 00:21:50.696 "params": { 00:21:50.696 "name": "Nvme9", 00:21:50.696 "trtype": "tcp", 00:21:50.696 "traddr": "10.0.0.2", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "4420", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:50.696 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:50.696 "hdgst": false, 00:21:50.696 "ddgst": false 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 },{ 00:21:50.696 "params": { 00:21:50.696 "name": "Nvme10", 00:21:50.696 "trtype": "tcp", 00:21:50.696 "traddr": "10.0.0.2", 00:21:50.696 "adrfam": "ipv4", 00:21:50.696 "trsvcid": "4420", 00:21:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:50.696 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:50.696 "hdgst": false, 00:21:50.696 "ddgst": false 00:21:50.696 }, 00:21:50.696 "method": "bdev_nvme_attach_controller" 00:21:50.696 }' 00:21:50.696 [2024-12-10 22:52:58.228593] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:21:50.696 [2024-12-10 22:52:58.228674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113205 ] 00:21:50.696 [2024-12-10 22:52:58.302706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.696 [2024-12-10 22:52:58.361892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.595 Running I/O for 10 seconds... 
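The repeated `config+=("$(cat <<-EOF ... )")` entries above are `gen_nvmf_target_json` building one JSON connection stanza per subsystem, then joining the fragments with `IFS=,` into the single document fed to bdevperf via `--json /dev/fd/63`. A minimal standalone sketch of that pattern (three subsystems instead of ten, and only a subset of the per-controller fields; variable values mirror the log):

```shell
# Build one JSON fragment per subsystem via a heredoc, as the log does,
# then comma-join them the way "IFS=, printf '%s\n' ..." does at the end.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
config+=("$(cat <<EOF
{"params":{"name":"Nvme$subsystem","trtype":"$TEST_TRANSPORT","traddr":"$NVMF_FIRST_TARGET_IP","trsvcid":"$NVMF_PORT","subnqn":"nqn.2016-06.io.spdk:cnode$subsystem"},"method":"bdev_nvme_attach_controller"}
EOF
)")
done

# Join with commas; the subshell keeps the IFS change from leaking out.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

An unquoted heredoc delimiter (`EOF`, not `'EOF'`) is what lets `$subsystem` and the transport variables expand inside each fragment, which is the whole point of the template.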
00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=16 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 16 -ge 100 ']' 00:21:52.595 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:52.854 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:52.854 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:52.854 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:52.854 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:52.854 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.854 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:53.112 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.112 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:53.112 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:53.112 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 113205 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 113205 ']' 
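The trace above is shutdown.sh's `waitforio` loop: it polls `bdev_get_iostat` over the bdevperf RPC socket, extracts `num_read_ops` with jq, and keeps sleeping 0.25 s until the count crosses 100 (16, then 67, then 131 in this run). A minimal stand-alone sketch of that pattern, with a fake counter in place of the RPC call:

```shell
# Sketch of the waitforio polling pattern seen in the log. read_io_count
# is a mock standing in for:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
statefile=$(mktemp)
echo 10 > "$statefile"

read_io_count() {
  local n
  n=$(<"$statefile")
  n=$((n + 60))              # pretend I/O accumulated between polls
  echo "$n" > "$statefile"
  echo "$n"
}

waitforio() {
  local threshold=$1 tries=$2 count
  while (( tries-- > 0 )); do
    count=$(read_io_count)
    if (( count >= threshold )); then
      return 0               # enough reads observed; the bdev is doing I/O
    fi
    sleep 0.25               # same back-off interval shutdown.sh uses
  done
  return 1                   # timed out without reaching the threshold
}

if waitforio 100 10; then echo "io threshold reached"; fi
```

Because the mock persists its counter in a file rather than a shell variable, it survives the command-substitution subshell, just as the real RPC state lives outside the polling shell.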
00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 113205 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113205 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113205' 00:21:53.371 killing process with pid 113205 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 113205 00:21:53.371 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 113205 00:21:53.371 1864.00 IOPS, 116.50 MiB/s [2024-12-10T21:53:01.103Z] Received shutdown signal, test time was about 1.112185 seconds 00:21:53.371 00:21:53.371 Latency(us) 00:21:53.371 [2024-12-10T21:53:01.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.371 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.371 Verification LBA range: start 0x0 length 0x400 00:21:53.371 Nvme1n1 : 1.10 232.78 14.55 0.00 0.00 272122.31 18641.35 259425.47 00:21:53.371 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.371 Verification LBA range: start 0x0 length 0x400 00:21:53.371 Nvme2n1 : 1.08 236.26 14.77 
0.00 0.00 262492.35 18641.35 256318.58 00:21:53.371 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.371 Verification LBA range: start 0x0 length 0x400 00:21:53.371 Nvme3n1 : 1.07 242.09 15.13 0.00 0.00 250344.35 10243.03 256318.58 00:21:53.371 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.371 Verification LBA range: start 0x0 length 0x400 00:21:53.371 Nvme4n1 : 1.08 241.73 15.11 0.00 0.00 247003.69 8641.04 250104.79 00:21:53.371 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.371 Verification LBA range: start 0x0 length 0x400 00:21:53.371 Nvme5n1 : 1.11 230.79 14.42 0.00 0.00 256129.33 20291.89 260978.92 00:21:53.371 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.371 Verification LBA range: start 0x0 length 0x400 00:21:53.371 Nvme6n1 : 1.11 231.56 14.47 0.00 0.00 250428.87 20874.43 256318.58 00:21:53.371 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.371 Verification LBA range: start 0x0 length 0x400 00:21:53.371 Nvme7n1 : 1.09 235.15 14.70 0.00 0.00 241779.29 18544.26 254765.13 00:21:53.371 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.371 Verification LBA range: start 0x0 length 0x400 00:21:53.371 Nvme8n1 : 1.10 233.76 14.61 0.00 0.00 239014.49 33981.63 240784.12 00:21:53.371 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.371 Verification LBA range: start 0x0 length 0x400 00:21:53.371 Nvme9n1 : 1.07 179.87 11.24 0.00 0.00 303607.15 26408.58 271853.04 00:21:53.371 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.371 Verification LBA range: start 0x0 length 0x400 00:21:53.371 Nvme10n1 : 1.11 230.35 14.40 0.00 0.00 233157.97 19126.80 290494.39 00:21:53.371 [2024-12-10T21:53:01.103Z] 
=================================================================================================================== 00:21:53.371 [2024-12-10T21:53:01.104Z] Total : 2294.34 143.40 0.00 0.00 254359.01 8641.04 290494.39 00:21:53.630 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 113029 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:55.001 rmmod 
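After the I/O summary, the log runs autotest_common.sh's `killprocess` against the bdevperf pid (113205): it checks the pid is still alive with `kill -0`, inspects the process name via `ps --no-headers -o comm=` so it never signals a `sudo` wrapper, then kills and reaps it. A condensed sketch of that flow (simplified from what the trace shows; error handling is abbreviated):

```shell
# Condensed killprocess flow as traced in the log: refuse empty pids,
# refuse dead pids, refuse sudo wrappers, then kill and reap.
killprocess() {
  local pid=$1 name
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 1        # already gone?
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" != "sudo" ] || return 1             # never kill the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true               # reap; ignore the signal exit status
}

# Demo against a throwaway child process.
sleep 5 &
demo_pid=$!
killprocess "$demo_pid"
```

The `wait` at the end matters in the real test: it guarantees bdevperf has fully exited before the target (pid 113029) is torn down in the next step.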
nvme_tcp 00:21:55.001 rmmod nvme_fabrics 00:21:55.001 rmmod nvme_keyring 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 113029 ']' 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 113029 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 113029 ']' 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 113029 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113029 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113029' 00:21:55.001 killing process with pid 113029 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 
113029 00:21:55.001 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 113029 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.260 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.806 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.806 00:21:57.806 real 0m7.847s 00:21:57.806 user 0m24.214s 00:21:57.806 sys 0m1.575s 00:21:57.806 22:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.806 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.806 ************************************ 00:21:57.806 END TEST nvmf_shutdown_tc2 00:21:57.806 ************************************ 00:21:57.806 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:57.806 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:57.806 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.806 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:57.806 ************************************ 00:21:57.806 START TEST nvmf_shutdown_tc3 00:21:57.806 ************************************ 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # net_devs=() 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:57.806 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:57.806 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.806 22:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:57.806 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.806 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.806 22:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:57.807 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.807 22:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:57.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:21:57.807 00:21:57.807 --- 10.0.0.2 ping statistics --- 00:21:57.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.807 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:57.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:21:57.807 00:21:57.807 --- 10.0.0.1 ping statistics --- 00:21:57.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.807 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.807 
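The two successful pings above confirm the namespace plumbing `nvmf_tcp_init` set up: the target NIC `cvl_0_0` lives inside the `cvl_0_0_ns_spdk` namespace with 10.0.0.2, while the initiator side `cvl_0_1` keeps 10.0.0.1 in the root namespace. A dry-run sketch of that sequence follows; `setup_target_ns` is an illustrative name, and `RUN=echo` prints the commands instead of executing them, since the real steps need root and the physical cvl_0_* interfaces.

```shell
# Dry-run replay of the netns setup the log performs before the pings.
# Override RUN with an empty value (and run as root) to execute for real.
RUN=${RUN:-echo}

setup_target_ns() {
  local ns=$1 tgt_if=$2 ini_if=$3
  $RUN ip netns add "$ns"
  $RUN ip link set "$tgt_if" netns "$ns"                      # move target NIC into the namespace
  $RUN ip addr add 10.0.0.1/24 dev "$ini_if"                  # initiator address, root namespace
  $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  $RUN ip link set "$ini_if" up
  $RUN ip netns exec "$ns" ip link set "$tgt_if" up
  $RUN ip netns exec "$ns" ip link set lo up
  $RUN ping -c 1 10.0.0.2                                     # target reachable from initiator side
  $RUN ip netns exec "$ns" ping -c 1 10.0.0.1                 # initiator reachable from target side
}

setup_target_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

With the namespace in place, the nvmf target is launched under `ip netns exec cvl_0_0_ns_spdk` (as the `nvmfpid=114118` line below shows), so it listens on 10.0.0.2 isolated from the host stack.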
22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=114118 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 114118 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 114118 ']' 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.807 [2024-12-10 22:53:05.230964] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:21:57.807 [2024-12-10 22:53:05.231039] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.807 [2024-12-10 22:53:05.304462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.807 [2024-12-10 22:53:05.363368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.807 [2024-12-10 22:53:05.363422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.807 [2024-12-10 22:53:05.363451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.807 [2024-12-10 22:53:05.363462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.807 [2024-12-10 22:53:05.363472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:57.807 [2024-12-10 22:53:05.365060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.807 [2024-12-10 22:53:05.365123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.807 [2024-12-10 22:53:05.365187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:57.807 [2024-12-10 22:53:05.365190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.807 [2024-12-10 22:53:05.519397] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.807 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.807 22:53:05 
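The startup handshake traced above (`nvmfappstart` → `waitforlisten 114118` with `max_retries=100` on `/var/tmp/spdk.sock`) is a bounded poll: confirm the target process is still alive, then probe for its listening socket, up to the retry limit. A minimal sketch of that pattern — the helper name and the plain file-existence probe are illustrative stand-ins for the real RPC round-trip in `autotest_common.sh`:

```shell
# Bounded wait for a daemon's control socket, in the spirit of
# waitforlisten above. The [ -e "$sock" ] existence probe is a
# simplification standing in for an actual RPC round-trip.
wait_for_sock() {
    local pid=$1 sock=$2 max_retries=${3:-100}
    while [ "$max_retries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # daemon died while we waited
        [ -e "$sock" ] && return 0               # socket path showed up
        max_retries=$((max_retries - 1))
        sleep 0.1
    done
    return 1                                     # retries exhausted
}
```

In the real harness the probe is an RPC call over the UNIX socket rather than a file check, and a failed wait falls through to the cleanup trap installed a few lines later.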
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.089 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.089 Malloc1 00:21:58.089 [2024-12-10 22:53:05.620469] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.089 Malloc2 00:21:58.089 Malloc3 00:21:58.089 Malloc4 00:21:58.089 Malloc5 00:21:58.347 Malloc6 00:21:58.347 Malloc7 00:21:58.347 Malloc8 00:21:58.347 Malloc9 
00:21:58.347 Malloc10 00:21:58.347 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.347 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:58.347 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:58.347 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=114252 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 114252 /var/tmp/bdevperf.sock 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 114252 ']' 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:58.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.605 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.605 { 00:21:58.605 "params": { 00:21:58.605 "name": "Nvme$subsystem", 00:21:58.605 "trtype": "$TEST_TRANSPORT", 00:21:58.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "$NVMF_PORT", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.606 "hdgst": ${hdgst:-false}, 00:21:58.606 "ddgst": ${ddgst:-false} 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 } 00:21:58.606 EOF 00:21:58.606 )") 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.606 { 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme$subsystem", 00:21:58.606 "trtype": "$TEST_TRANSPORT", 00:21:58.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.606 
"adrfam": "ipv4", 00:21:58.606 "trsvcid": "$NVMF_PORT", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.606 "hdgst": ${hdgst:-false}, 00:21:58.606 "ddgst": ${ddgst:-false} 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 } 00:21:58.606 EOF 00:21:58.606 )") 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.606 { 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme$subsystem", 00:21:58.606 "trtype": "$TEST_TRANSPORT", 00:21:58.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "$NVMF_PORT", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.606 "hdgst": ${hdgst:-false}, 00:21:58.606 "ddgst": ${ddgst:-false} 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 } 00:21:58.606 EOF 00:21:58.606 )") 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.606 { 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme$subsystem", 00:21:58.606 "trtype": "$TEST_TRANSPORT", 00:21:58.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "$NVMF_PORT", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.606 "hdgst": ${hdgst:-false}, 00:21:58.606 "ddgst": ${ddgst:-false} 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 } 00:21:58.606 EOF 00:21:58.606 )") 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.606 { 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme$subsystem", 00:21:58.606 "trtype": "$TEST_TRANSPORT", 00:21:58.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "$NVMF_PORT", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.606 "hdgst": ${hdgst:-false}, 00:21:58.606 "ddgst": ${ddgst:-false} 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 } 00:21:58.606 EOF 00:21:58.606 )") 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.606 { 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme$subsystem", 00:21:58.606 "trtype": "$TEST_TRANSPORT", 00:21:58.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "$NVMF_PORT", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.606 "hdgst": ${hdgst:-false}, 00:21:58.606 "ddgst": 
${ddgst:-false} 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 } 00:21:58.606 EOF 00:21:58.606 )") 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.606 { 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme$subsystem", 00:21:58.606 "trtype": "$TEST_TRANSPORT", 00:21:58.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "$NVMF_PORT", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.606 "hdgst": ${hdgst:-false}, 00:21:58.606 "ddgst": ${ddgst:-false} 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 } 00:21:58.606 EOF 00:21:58.606 )") 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.606 { 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme$subsystem", 00:21:58.606 "trtype": "$TEST_TRANSPORT", 00:21:58.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "$NVMF_PORT", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.606 "hdgst": ${hdgst:-false}, 00:21:58.606 "ddgst": ${ddgst:-false} 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 } 00:21:58.606 EOF 00:21:58.606 
)") 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.606 { 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme$subsystem", 00:21:58.606 "trtype": "$TEST_TRANSPORT", 00:21:58.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "$NVMF_PORT", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.606 "hdgst": ${hdgst:-false}, 00:21:58.606 "ddgst": ${ddgst:-false} 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 } 00:21:58.606 EOF 00:21:58.606 )") 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.606 { 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme$subsystem", 00:21:58.606 "trtype": "$TEST_TRANSPORT", 00:21:58.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "$NVMF_PORT", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.606 "hdgst": ${hdgst:-false}, 00:21:58.606 "ddgst": ${ddgst:-false} 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 } 00:21:58.606 EOF 00:21:58.606 )") 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:58.606 
22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:58.606 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme1", 00:21:58.606 "trtype": "tcp", 00:21:58.606 "traddr": "10.0.0.2", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "4420", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.606 "hdgst": false, 00:21:58.606 "ddgst": false 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 },{ 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme2", 00:21:58.606 "trtype": "tcp", 00:21:58.606 "traddr": "10.0.0.2", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "4420", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:58.606 "hdgst": false, 00:21:58.606 "ddgst": false 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 },{ 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme3", 00:21:58.606 "trtype": "tcp", 00:21:58.606 "traddr": "10.0.0.2", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "4420", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:58.606 "hdgst": false, 00:21:58.606 "ddgst": false 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 },{ 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme4", 00:21:58.606 "trtype": "tcp", 00:21:58.606 "traddr": "10.0.0.2", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "4420", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:58.606 "hdgst": false, 00:21:58.606 "ddgst": false 00:21:58.606 }, 
00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 },{ 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme5", 00:21:58.606 "trtype": "tcp", 00:21:58.606 "traddr": "10.0.0.2", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "4420", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:58.606 "hdgst": false, 00:21:58.606 "ddgst": false 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 },{ 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme6", 00:21:58.606 "trtype": "tcp", 00:21:58.606 "traddr": "10.0.0.2", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "4420", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:58.606 "hdgst": false, 00:21:58.606 "ddgst": false 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 },{ 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme7", 00:21:58.606 "trtype": "tcp", 00:21:58.606 "traddr": "10.0.0.2", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "4420", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:58.606 "hdgst": false, 00:21:58.606 "ddgst": false 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 },{ 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme8", 00:21:58.606 "trtype": "tcp", 00:21:58.606 "traddr": "10.0.0.2", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "4420", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:58.606 "hdgst": false, 00:21:58.606 "ddgst": false 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 },{ 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme9", 00:21:58.606 "trtype": "tcp", 00:21:58.606 "traddr": "10.0.0.2", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "4420", 00:21:58.606 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:58.606 "hdgst": false, 00:21:58.606 "ddgst": false 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 },{ 00:21:58.606 "params": { 00:21:58.606 "name": "Nvme10", 00:21:58.606 "trtype": "tcp", 00:21:58.606 "traddr": "10.0.0.2", 00:21:58.606 "adrfam": "ipv4", 00:21:58.606 "trsvcid": "4420", 00:21:58.606 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:58.606 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:58.606 "hdgst": false, 00:21:58.606 "ddgst": false 00:21:58.606 }, 00:21:58.606 "method": "bdev_nvme_attach_controller" 00:21:58.606 }' 00:21:58.606 [2024-12-10 22:53:06.143270] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:21:58.606 [2024-12-10 22:53:06.143349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114252 ] 00:21:58.606 [2024-12-10 22:53:06.215226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.606 [2024-12-10 22:53:06.274708] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.505 Running I/O for 10 seconds... 
00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:00.505 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:00.506 22:53:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:00.506 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:00.506 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.506 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:00.764 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.764 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:00.764 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:00.764 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:01.023 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:01.023 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:01.023 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.023 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.023 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.023 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.023 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:01.023 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:01.023 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:01.023 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:01.295 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:01.295 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:01.295 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.295 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.295 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:01.296 22:53:08 
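The `shutdown.sh@60-68` loop traced above is a threshold poll: read `num_read_ops` for Nvme1n1 via `bdev_get_iostat`, succeed once the count reaches 100, and give up after 10 rounds (the trace shows 3 → 67 → 136, succeeding on the third read). A generic sketch of that loop, with a canned probe standing in for the real `rpc_cmd ... | jq` pipeline:

```shell
# Poll an I/O counter until it crosses a threshold or retries run out,
# mirroring waitforio in the trace above (which also sleeps 0.25s per
# round; the sleep is dropped here to keep the example instant).
waitforio() {
    local threshold=$1 retries=$2
    while [ "$retries" -gt 0 ]; do
        read_ops                      # stand-in: sets $OPS instead of rpc_cmd | jq
        [ "$OPS" -ge "$threshold" ] && return 0
        retries=$((retries - 1))
    done
    return 1
}

# Canned probe replaying the counts seen in the trace: 3, 67, 136.
_calls=0
read_ops() {
    _calls=$((_calls + 1))
    case $_calls in 1) OPS=3 ;; 2) OPS=67 ;; *) OPS=136 ;; esac
}
```

The probe reports through the `OPS` variable rather than stdout so that calling it does not fork a subshell, which would lose the `_calls` state between rounds.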
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 114118 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 114118 ']' 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 114118 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114118 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114118' 00:22:01.296 killing process with pid 114118 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 114118 00:22:01.296 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 114118 00:22:01.296 [2024-12-10 22:53:08.895354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5cf0 is same with the state(6) to be set 00:22:01.296 [2024-12-10 22:53:08.897139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1287120 is same with the state(6) to be set 00:22:01.296 [2024-12-10 22:53:08.897172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1287120 is same with the state(6) to be set 00:22:01.296 [last message repeated for tqpair=0x1287120 through 22:53:08.897950] 00:22:01.296 [2024-12-10 22:53:08.899303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f61c0 is same with the state(6) to be set 00:22:01.296 [last message repeated for tqpair=0x14f61c0 through 22:53:08.900066] 00:22:01.297 [2024-12-10 22:53:08.901770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6690 is same with the state(6) to be set 00:22:01.297 [last message repeated for tqpair=0x14f6690] 00:22:01.298 [2024-12-10 22:53:08.902569]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6690 is same with the state(6) to be set 00:22:01.298 [2024-12-10 22:53:08.902589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6690 is same with the state(6) to be set 00:22:01.298 [2024-12-10 22:53:08.902613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6690 is same with the state(6) to be set 00:22:01.298 [2024-12-10 22:53:08.902627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6690 is same with the state(6) to be set 00:22:01.298 [2024-12-10 22:53:08.902642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6690 is same with the state(6) to be set 00:22:01.298 [2024-12-10 22:53:08.902653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6690 is same with the state(6) to be set 00:22:01.298 [2024-12-10 22:53:08.902669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6690 is same with the state(6) to be set 00:22:01.298 [2024-12-10 22:53:08.903112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 
[2024-12-10 22:53:08.903225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd06850 is same with the state(6) to be set 00:22:01.298 [2024-12-10 22:53:08.903336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89c330 is same with the state(6) to be set 00:22:01.298 [2024-12-10 22:53:08.903528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89d7b0 is same with the state(6) to be set 00:22:01.298 [2024-12-10 22:53:08.903711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 
22:53:08.903731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.298 [2024-12-10 22:53:08.903811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a8320 is same with the state(6) to be set 00:22:01.298 [2024-12-10 22:53:08.903907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.298 [2024-12-10 22:53:08.903929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.298 [2024-12-10 22:53:08.903971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.903987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.298 [2024-12-10 22:53:08.904001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.904016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.298 [2024-12-10 22:53:08.904030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.904046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.298 [2024-12-10 22:53:08.904064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.904080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.298 [2024-12-10 22:53:08.904094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.298 [2024-12-10 22:53:08.904109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904320] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904475] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 
[2024-12-10 22:53:08.904846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.904986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.904998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.905012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.905025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.905039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.905052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.905066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.905079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.905093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.905106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.905120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.905133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.905147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.905160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.905178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.905192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.905207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.905220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.905234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.905247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.905263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.299 [2024-12-10 22:53:08.905276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.299 [2024-12-10 22:53:08.905290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 
22:53:08.905467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.905789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.300 [2024-12-10 22:53:08.905802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.300 [2024-12-10 22:53:08.908262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:01.300 [2024-12-10 22:53:08.908313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a8320 (9): Bad file descriptor 00:22:01.300 [2024-12-10 22:53:08.909263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to 
be set 00:22:01.300 [2024-12-10 22:53:08.909412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 22:53:08.909543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set 00:22:01.300 [2024-12-10 
22:53:08.909565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set
00:22:01.300 [2024-12-10 22:53:08.909582] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:01.301 [2024-12-10 22:53:08.910083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6b80 is same with the state(6) to be set
00:22:01.301 [2024-12-10 22:53:08.910223] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:01.301 [2024-12-10 22:53:08.910373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.301 [2024-12-10 22:53:08.910402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a8320 with addr=10.0.0.2, port=4420
00:22:01.301 [2024-12-10 22:53:08.910419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a8320 is same with the state(6) to be set
00:22:01.301 [2024-12-10 22:53:08.910511] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:01.301 [2024-12-10 22:53:08.910967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a8320 (9): Bad file descriptor
00:22:01.301 [2024-12-10 22:53:08.911410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:01.301 [2024-12-10 22:53:08.911433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:01.301 [2024-12-10 22:53:08.911452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:01.301 [2024-12-10 22:53:08.911468] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:01.301 [2024-12-10 22:53:08.912347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912594] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912863] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.912989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913129] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913380] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.301 [2024-12-10 22:53:08.913401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913638] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd06850 (9): Bad file descriptor 00:22:01.302 [2024-12-10 22:53:08.913655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89c330 (9): Bad file descriptor 00:22:01.302 [2024-12-10 22:53:08.913698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f7050 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.302 [2024-12-10 22:53:08.913767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.302 [2024-12-10 22:53:08.913782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.302 [2024-12-10 22:53:08.913801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.302 [2024-12-10 22:53:08.913816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.302 [2024-12-10 
22:53:08.913829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.302 [2024-12-10 22:53:08.913843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.302 [2024-12-10 22:53:08.913855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.302 [2024-12-10 22:53:08.913868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89c130 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.913922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89d7b0 (9): Bad file descriptor 00:22:01.302 [2024-12-10 22:53:08.913975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.302 [2024-12-10 22:53:08.913995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.302 [2024-12-10 22:53:08.914009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.302 [2024-12-10 22:53:08.914022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.302 [2024-12-10 22:53:08.914036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.302 [2024-12-10 22:53:08.914049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.302 [2024-12-10 22:53:08.914063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.302 [2024-12-10 22:53:08.914076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.302 [2024-12-10 22:53:08.914088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbd060 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915132] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.302 [2024-12-10 22:53:08.915380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 
22:53:08.915522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915705] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915853] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.915990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.916002] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.916017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.916030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.916041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.302 [2024-12-10 22:53:08.916056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916150] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f73d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.916411] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.303 [2024-12-10 22:53:08.917899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.917924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.917953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.917977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.917997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-10 22:53:08.918071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 
00:22:01.303 [2024-12-10 22:53:08.918121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918374] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 
00:22:01.303 [2024-12-10 22:53:08.918456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.303 [2024-12-10 22:53:08.918483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.303 [2024-12-10 22:53:08.918488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.303 [2024-12-10 22:53:08.918495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12868d0 is same with the state(6) to be set 00:22:01.304 [2024-12-10 22:53:08.918812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.918974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.918988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.919003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.919017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.919032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.919045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.919061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.919078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.919095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.919108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.919123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.919137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.919152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.919166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.919181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.919194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.919210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.919223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.919238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.304 [2024-12-10 22:53:08.919252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.304 [2024-12-10 22:53:08.919267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919295] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919465] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.305 [2024-12-10 22:53:08.919928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set 00:22:01.305 [2024-12-10 22:53:08.919935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.305 [2024-12-10 22:53:08.919949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:22:01.305 [2024-12-10 22:53:08.919948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.919963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8ec0 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.919971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.919999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.305 [2024-12-10 22:53:08.920449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.920982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286c50 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.921457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:01.306 [2024-12-10 22:53:08.921528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc32c0 (9): Bad file descriptor
00:22:01.306 [2024-12-10 22:53:08.921718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:01.306 [2024-12-10 22:53:08.922327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.306 [2024-12-10 22:53:08.922357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc32c0 with addr=10.0.0.2, port=4420
00:22:01.306 [2024-12-10 22:53:08.922373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc32c0 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.922465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.306 [2024-12-10 22:53:08.922491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a8320 with addr=10.0.0.2, port=4420
00:22:01.306 [2024-12-10 22:53:08.922506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a8320 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.922619] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:01.306 [2024-12-10 22:53:08.922715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc32c0 (9): Bad file descriptor
00:22:01.306 [2024-12-10 22:53:08.922747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a8320 (9): Bad file descriptor
00:22:01.306 [2024-12-10 22:53:08.922885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:22:01.306 [2024-12-10 22:53:08.922907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:22:01.306 [2024-12-10 22:53:08.922922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:22:01.306 [2024-12-10 22:53:08.922937] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:22:01.306 [2024-12-10 22:53:08.922951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:01.306 [2024-12-10 22:53:08.922963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:01.306 [2024-12-10 22:53:08.922975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:01.306 [2024-12-10 22:53:08.922987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:01.306 [2024-12-10 22:53:08.923147] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:01.306 [2024-12-10 22:53:08.923651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.923673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.923688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.923701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.923716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.923729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.923742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.923754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.923766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd06630 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.923822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.923843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.923867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.923880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.923894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.923906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.923920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.923933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.923950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01cd0 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.923984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89c130 (9): Bad file descriptor
00:22:01.306 [2024-12-10 22:53:08.924035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.924055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.924069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.924082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.924095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.924107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.924121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.306 [2024-12-10 22:53:08.924133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.924145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x810110 is same with the state(6) to be set
00:22:01.306 [2024-12-10 22:53:08.924180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbd060 (9): Bad file descriptor
00:22:01.306 [2024-12-10 22:53:08.924310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.306 [2024-12-10 22:53:08.924332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.924352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.306 [2024-12-10 22:53:08.924366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.924382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.306 [2024-12-10 22:53:08.924396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.924412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.306 [2024-12-10 22:53:08.924426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.924441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.306 [2024-12-10 22:53:08.924454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.306 [2024-12-10 22:53:08.924469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.306 [2024-12-10 22:53:08.924483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.924981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.924995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.307 [2024-12-10 22:53:08.925696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.307 [2024-12-10 22:53:08.925709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.925724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.925738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.925753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.925767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.925782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.925801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.925817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.925831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.925862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.925876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.925892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.925905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.925921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.925935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.925950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.925964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.940619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.940676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.940694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.940720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.940737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.940751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.940766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.940780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.940795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.940809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.940824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.940838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.940853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.940867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.940883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.940896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.940912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.940925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.940941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbec7a0 is same with the state(6) to be set
00:22:01.308 [2024-12-10 22:53:08.942324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.942348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.942378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.942394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.942411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.942426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.942442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.942456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.942472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.942486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.942507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.942521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.942536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.942561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.942578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.942592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.308 [2024-12-10 22:53:08.942608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.308 [2024-12-10 22:53:08.942622] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.942637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.942667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.942697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.942726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.942755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.942784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.942813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.942842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.942870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.942910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.942941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.308 [2024-12-10 22:53:08.942971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.942984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.308 [2024-12-10 22:53:08.943000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.308 [2024-12-10 22:53:08.943013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943133] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 
22:53:08.943659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943822] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.943978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.943992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.944008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.944021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.944040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.944055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.944070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.944084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.944099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.944113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.944128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.309 [2024-12-10 22:53:08.944141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.309 [2024-12-10 22:53:08.944157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 
[2024-12-10 22:53:08.944171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.944185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.944199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.944214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.944228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.944243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.944257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.944273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.944286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.944300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc99900 is same with the state(6) to be set 00:22:01.310 [2024-12-10 22:53:08.945618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.945664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.945696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.945730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.945760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.945790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.945818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.945856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.945884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.945913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.945942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.945971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.945984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.310 [2024-12-10 22:53:08.945999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.946013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.946028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.946041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.946056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.946070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.946086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.946107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.946123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.946137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.946153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.946175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.946192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.946206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.946222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.946235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.946251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.946265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.946280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.946294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.946309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.310 [2024-12-10 22:53:08.946323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.310 [2024-12-10 22:53:08.946338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.310 [2024-12-10 22:53:08.946352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ (len:128) / ABORTED - SQ DELETION (00/08) completion pairs repeated for cid:24-61, lba:19456-24192 ...]
00:22:01.311 [2024-12-10 22:53:08.947516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.311 [2024-12-10 22:53:08.947530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.311 [2024-12-10 22:53:08.947551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.311 [2024-12-10 22:53:08.947567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.311 [2024-12-10 22:53:08.947582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcadc40 is same with the state(6) to be set
00:22:01.311 [2024-12-10 22:53:08.948815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:01.311 [2024-12-10 22:53:08.948850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:01.311 [2024-12-10 22:53:08.948873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:01.311 [2024-12-10 22:53:08.949000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd06630 (9): Bad file descriptor
00:22:01.311 [2024-12-10 22:53:08.949046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01cd0 (9): Bad file descriptor
00:22:01.311 [2024-12-10 22:53:08.949089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x810110 (9): Bad file descriptor
00:22:01.311 task offset: 24320 on job bdev=Nvme1n1 fails
00:22:01.311 1743.20 IOPS, 108.95 MiB/s [2024-12-10T21:53:09.043Z]
[2024-12-10 22:53:08.965257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.311 [2024-12-10 22:53:08.965346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x89c330 with addr=10.0.0.2, port=4420
00:22:01.311 [2024-12-10 22:53:08.965369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89c330 is same with the state(6) to be set
00:22:01.311 [2024-12-10 22:53:08.965498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.311 [2024-12-10 22:53:08.965523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x89d7b0 with addr=10.0.0.2, port=4420
00:22:01.311 [2024-12-10 22:53:08.965539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89d7b0 is same with the state(6) to be set
00:22:01.311 [2024-12-10 22:53:08.965639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.311 [2024-12-10 22:53:08.965664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd06850 with addr=10.0.0.2, port=4420
00:22:01.311 [2024-12-10 22:53:08.965679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd06850 is same with the state(6) to be set
00:22:01.311 [2024-12-10 22:53:08.965744] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:22:01.311 [2024-12-10 22:53:08.965768] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:22:01.311 [2024-12-10 22:53:08.965806] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:22:01.311 [2024-12-10 22:53:08.965836] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:22:01.311 [2024-12-10 22:53:08.965865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd06850 (9): Bad file descriptor
00:22:01.311 [2024-12-10 22:53:08.965896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89d7b0 (9): Bad file descriptor
00:22:01.311 [2024-12-10 22:53:08.965920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89c330 (9): Bad file descriptor
00:22:01.311 [2024-12-10 22:53:08.966566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.311 [2024-12-10 22:53:08.966595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.311 [2024-12-10 22:53:08.966627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.311 [2024-12-10 22:53:08.966643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.312 [2024-12-10 22:53:08.966660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.312 [2024-12-10 22:53:08.966674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.312 [2024-12-10 22:53:08.966690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.312 [2024-12-10 22:53:08.966718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.312 [2024-12-10 22:53:08.966735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.312 [2024-12-10 22:53:08.966749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ (len:128) / ABORTED - SQ DELETION (00/08) completion pairs repeated for cid:5-59, lba:25216-32128 ...]
00:22:01.313 [2024-12-10 22:53:08.968401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.313 [2024-12-10 22:53:08.968415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.968430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.968444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.968460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.968473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.968488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.968502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.968516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6a00 is same with the state(6) to be set 00:22:01.313 [2024-12-10 22:53:08.969792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.969814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.969835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.969849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.313 [2024-12-10 22:53:08.969871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.969885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.969901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.969914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.969930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.969944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.969960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.969973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.969988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.970002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.970017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.970031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.970046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.970061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.970077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.970091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.970107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.970120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.970136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.970149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.970165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.970178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.970194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.313 [2024-12-10 22:53:08.970207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.313 [2024-12-10 22:53:08.970223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.314 [2024-12-10 22:53:08.970373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970534] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.970979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.970997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 
22:53:08.971058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971221] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.314 [2024-12-10 22:53:08.971432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.314 [2024-12-10 22:53:08.971446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.971461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.971475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.971491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.971505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.971521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.971535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.971558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 
[2024-12-10 22:53:08.971575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.971590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.971604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.971620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.971634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.971649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.971663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.971678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.971692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.971708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.971721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.971736] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7c50 is same with the state(6) to be set 00:22:01.315 [2024-12-10 22:53:08.973262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:01.315 [2024-12-10 22:53:08.973293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:01.315 [2024-12-10 22:53:08.973338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:01.315 [2024-12-10 22:53:08.973554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 
22:53:08.973691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.973973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.973989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.315 [2024-12-10 22:53:08.974191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974358] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.315 [2024-12-10 22:53:08.974446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.315 [2024-12-10 22:53:08.974462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 
22:53:08.974872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.974976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.974990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975039] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.975345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.975359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab640 is same with the state(6) to be set 00:22:01.316 [2024-12-10 
22:53:08.976591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.976614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.976639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.976655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.976672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.976685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.976701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.976715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.976731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.976744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.976760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.976773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.976788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.976802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.976817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.976830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.976846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.976860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.976875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.976889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.316 [2024-12-10 22:53:08.976905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.316 [2024-12-10 22:53:08.976918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.976934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.976947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.976962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.976976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.976992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.317 [2024-12-10 22:53:08.977112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977269] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 
22:53:08.977778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977938] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.317 [2024-12-10 22:53:08.977967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.317 [2024-12-10 22:53:08.977981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.977997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 
[2024-12-10 22:53:08.978273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.318 [2024-12-10 22:53:08.978479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.318 [2024-12-10 22:53:08.978493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac910 is same with the state(6) to be set 00:22:01.318 [2024-12-10 22:53:08.980875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:01.318 [2024-12-10 22:53:08.980912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:01.318 [2024-12-10 22:53:08.980939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:01.318 00:22:01.318 Latency(us) 00:22:01.318 [2024-12-10T21:53:09.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.318 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:01.318 Job: Nvme1n1 ended in about 0.96 seconds with error 00:22:01.318 Verification LBA range: start 0x0 length 0x400 00:22:01.318 Nvme1n1 : 0.96 197.69 12.36 66.59 0.00 239577.75 4490.43 256318.58 00:22:01.318 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:01.318 Job: Nvme2n1 ended in about 1.00 seconds with error 00:22:01.318 Verification LBA range: start 0x0 length 0x400 00:22:01.318 Nvme2n1 : 1.00 128.58 8.04 64.29 0.00 322439.14 19903.53 260978.92 
00:22:01.318 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:01.318 Job: Nvme3n1 ended in about 1.00 seconds with error 00:22:01.318 Verification LBA range: start 0x0 length 0x400 00:22:01.318 Nvme3n1 : 1.00 192.23 12.01 64.08 0.00 237993.53 18738.44 251658.24 00:22:01.318 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:01.318 Job: Nvme4n1 ended in about 1.02 seconds with error 00:22:01.318 Verification LBA range: start 0x0 length 0x400 00:22:01.318 Nvme4n1 : 1.02 187.69 11.73 62.56 0.00 239453.87 17282.09 259425.47 00:22:01.318 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:01.318 Job: Nvme5n1 ended in about 1.03 seconds with error 00:22:01.318 Verification LBA range: start 0x0 length 0x400 00:22:01.318 Nvme5n1 : 1.03 187.10 11.69 62.37 0.00 235710.96 22039.51 250104.79 00:22:01.318 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:01.318 Job: Nvme6n1 ended in about 0.97 seconds with error 00:22:01.318 Verification LBA range: start 0x0 length 0x400 00:22:01.318 Nvme6n1 : 0.97 196.97 12.31 65.66 0.00 218408.01 23495.87 233016.89 00:22:01.318 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:01.318 Verification LBA range: start 0x0 length 0x400 00:22:01.318 Nvme7n1 : 0.97 198.57 12.41 0.00 0.00 282557.50 18058.81 259425.47 00:22:01.318 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:01.318 Job: Nvme8n1 ended in about 1.03 seconds with error 00:22:01.318 Verification LBA range: start 0x0 length 0x400 00:22:01.318 Nvme8n1 : 1.03 190.33 11.90 58.26 0.00 222210.09 18252.99 257872.02 00:22:01.318 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:01.318 Job: Nvme9n1 ended in about 1.03 seconds with error 00:22:01.318 Verification LBA range: start 0x0 length 0x400 00:22:01.318 Nvme9n1 : 1.03 123.92 7.75 61.96 0.00 292882.71 20777.34 274959.93 00:22:01.318 
Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:01.318 Job: Nvme10n1 ended in about 1.00 seconds with error 00:22:01.318 Verification LBA range: start 0x0 length 0x400 00:22:01.318 Nvme10n1 : 1.00 127.73 7.98 63.87 0.00 276650.16 22233.69 290494.39 00:22:01.318 [2024-12-10T21:53:09.050Z] =================================================================================================================== 00:22:01.318 [2024-12-10T21:53:09.050Z] Total : 1730.81 108.18 569.63 0.00 252705.99 4490.43 290494.39 00:22:01.318 [2024-12-10 22:53:09.010538] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:01.318 [2024-12-10 22:53:09.010644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:01.318 [2024-12-10 22:53:09.010941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.318 [2024-12-10 22:53:09.010988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a8320 with addr=10.0.0.2, port=4420 00:22:01.318 [2024-12-10 22:53:09.011021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a8320 is same with the state(6) to be set 00:22:01.318 [2024-12-10 22:53:09.011144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.318 [2024-12-10 22:53:09.011172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc32c0 with addr=10.0.0.2, port=4420 00:22:01.318 [2024-12-10 22:53:09.011189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc32c0 is same with the state(6) to be set 00:22:01.318 [2024-12-10 22:53:09.011299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.577 [2024-12-10 22:53:09.011325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x89c130 with addr=10.0.0.2, port=4420 00:22:01.578 [2024-12-10 
22:53:09.011341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89c130 is same with the state(6) to be set 00:22:01.578 [2024-12-10 22:53:09.011358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.011371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.011389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:01.578 [2024-12-10 22:53:09.011407] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:01.578 [2024-12-10 22:53:09.011423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.011436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.011448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:01.578 [2024-12-10 22:53:09.011461] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:01.578 [2024-12-10 22:53:09.011474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.011486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.011498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:22:01.578 [2024-12-10 22:53:09.011511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:01.578 [2024-12-10 22:53:09.012516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.578 [2024-12-10 22:53:09.012564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbd060 with addr=10.0.0.2, port=4420 00:22:01.578 [2024-12-10 22:53:09.012594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbd060 is same with the state(6) to be set 00:22:01.578 [2024-12-10 22:53:09.012750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.578 [2024-12-10 22:53:09.012788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x810110 with addr=10.0.0.2, port=4420 00:22:01.578 [2024-12-10 22:53:09.012818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x810110 is same with the state(6) to be set 00:22:01.578 [2024-12-10 22:53:09.012931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.578 [2024-12-10 22:53:09.012966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01cd0 with addr=10.0.0.2, port=4420 00:22:01.578 [2024-12-10 22:53:09.012999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01cd0 is same with the state(6) to be set 00:22:01.578 [2024-12-10 22:53:09.013111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.578 [2024-12-10 22:53:09.013145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd06630 with addr=10.0.0.2, port=4420 00:22:01.578 [2024-12-10 22:53:09.013167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd06630 is same with the state(6) to be set 00:22:01.578 [2024-12-10 22:53:09.013204] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a8320 (9): Bad file descriptor 00:22:01.578 [2024-12-10 22:53:09.013248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc32c0 (9): Bad file descriptor 00:22:01.578 [2024-12-10 22:53:09.013283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89c130 (9): Bad file descriptor 00:22:01.578 [2024-12-10 22:53:09.013313] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:22:01.578 [2024-12-10 22:53:09.013342] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:22:01.578 [2024-12-10 22:53:09.013370] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:22:01.578 [2024-12-10 22:53:09.013416] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:01.578 [2024-12-10 22:53:09.013451] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:22:01.578 [2024-12-10 22:53:09.013488] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:22:01.578 [2024-12-10 22:53:09.014166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:01.578 [2024-12-10 22:53:09.014198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:01.578 [2024-12-10 22:53:09.014217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:01.578 [2024-12-10 22:53:09.014285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbd060 (9): Bad file descriptor 00:22:01.578 [2024-12-10 22:53:09.014312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x810110 (9): Bad file descriptor 00:22:01.578 [2024-12-10 22:53:09.014340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01cd0 (9): Bad file descriptor 00:22:01.578 [2024-12-10 22:53:09.014358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd06630 (9): Bad file descriptor 00:22:01.578 [2024-12-10 22:53:09.014374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.014387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.014400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:01.578 [2024-12-10 22:53:09.014413] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:01.578 [2024-12-10 22:53:09.014426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.014438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.014451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:01.578 [2024-12-10 22:53:09.014462] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:01.578 [2024-12-10 22:53:09.014481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.014493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.014506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:01.578 [2024-12-10 22:53:09.014528] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:01.578 [2024-12-10 22:53:09.014703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.578 [2024-12-10 22:53:09.014731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd06850 with addr=10.0.0.2, port=4420 00:22:01.578 [2024-12-10 22:53:09.014747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd06850 is same with the state(6) to be set 00:22:01.578 [2024-12-10 22:53:09.014830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.578 [2024-12-10 22:53:09.014860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x89d7b0 with addr=10.0.0.2, port=4420 00:22:01.578 [2024-12-10 22:53:09.014876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89d7b0 is same with the state(6) to be set 00:22:01.578 [2024-12-10 22:53:09.014954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.578 [2024-12-10 22:53:09.014980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x89c330 with addr=10.0.0.2, port=4420 00:22:01.578 [2024-12-10 22:53:09.014996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89c330 is same with the state(6) to be set 00:22:01.578 [2024-12-10 22:53:09.015011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.015030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.015047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:01.578 [2024-12-10 22:53:09.015060] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:22:01.578 [2024-12-10 22:53:09.015074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.015087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.015099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:01.578 [2024-12-10 22:53:09.015110] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:01.578 [2024-12-10 22:53:09.015123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.015135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.015148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:01.578 [2024-12-10 22:53:09.015159] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:01.578 [2024-12-10 22:53:09.015172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.015184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.015196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:01.578 [2024-12-10 22:53:09.015211] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:22:01.578 [2024-12-10 22:53:09.015285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd06850 (9): Bad file descriptor 00:22:01.578 [2024-12-10 22:53:09.015310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89d7b0 (9): Bad file descriptor 00:22:01.578 [2024-12-10 22:53:09.015328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89c330 (9): Bad file descriptor 00:22:01.578 [2024-12-10 22:53:09.015368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.015385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.015400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:01.578 [2024-12-10 22:53:09.015421] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:01.578 [2024-12-10 22:53:09.015435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:01.578 [2024-12-10 22:53:09.015448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:01.578 [2024-12-10 22:53:09.015460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:01.578 [2024-12-10 22:53:09.015472] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:22:01.579 [2024-12-10 22:53:09.015485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:01.579 [2024-12-10 22:53:09.015496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:01.579 [2024-12-10 22:53:09.015509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:01.579 [2024-12-10 22:53:09.015520] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:01.839 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 114252 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 114252 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 114252 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:02.774 22:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:22:02.774 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:02.774 rmmod nvme_tcp 00:22:03.035 rmmod nvme_fabrics 00:22:03.035 rmmod nvme_keyring 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 114118 ']' 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 114118 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 114118 ']' 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 114118 00:22:03.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (114118) - No such process 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 114118 is not found' 00:22:03.035 Process with pid 114118 is not found 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:03.035 22:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.035 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.940 00:22:04.940 real 0m7.583s 00:22:04.940 user 0m19.023s 00:22:04.940 sys 0m1.513s 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.940 ************************************ 00:22:04.940 END TEST nvmf_shutdown_tc3 00:22:04.940 ************************************ 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:04.940 ************************************ 00:22:04.940 START TEST nvmf_shutdown_tc4 00:22:04.940 ************************************ 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:04.940 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:04.941 22:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.941 22:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:04.941 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:04.941 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.941 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.200 22:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:05.200 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:05.200 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:05.200 22:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:05.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:22:05.200 00:22:05.200 --- 10.0.0.2 ping statistics --- 00:22:05.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.200 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:22:05.200 00:22:05.200 --- 10.0.0.1 ping statistics --- 00:22:05.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.200 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.200 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:05.200 22:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=115151 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 115151 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 115151 ']' 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.459 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:05.459 [2024-12-10 22:53:12.992441] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:22:05.459 [2024-12-10 22:53:12.992512] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.459 [2024-12-10 22:53:13.064640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.459 [2024-12-10 22:53:13.125311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.459 [2024-12-10 22:53:13.125375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.459 [2024-12-10 22:53:13.125403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.459 [2024-12-10 22:53:13.125414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.459 [2024-12-10 22:53:13.125424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:05.459 [2024-12-10 22:53:13.127009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.459 [2024-12-10 22:53:13.127069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.459 [2024-12-10 22:53:13.127138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:05.459 [2024-12-10 22:53:13.127140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:05.717 [2024-12-10 22:53:13.274698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.717 22:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.717 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:05.717 Malloc1 00:22:05.717 [2024-12-10 22:53:13.369693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.717 Malloc2 00:22:05.717 Malloc3 00:22:05.975 Malloc4 00:22:05.975 Malloc5 00:22:05.975 Malloc6 00:22:05.975 Malloc7 00:22:05.975 Malloc8 00:22:06.233 Malloc9 
00:22:06.233 Malloc10 00:22:06.233 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.233 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:06.233 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.233 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:06.233 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=115275 00:22:06.233 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:06.233 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:06.233 [2024-12-10 22:53:13.878921] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
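The setup phase traced above (`nvmftestinit` / `nvmf_tcp_init` in nvmf/common.sh) moves the target-side NIC into a private network namespace so the initiator and target can share one physical host. As a reading aid only, the command sequence visible in the trace can be reconstructed as a small hypothetical Python helper that merely prints the commands; actually executing them requires root and the e810 ports (`cvl_0_0`/`cvl_0_1`) present on this test node:

```python
# Hypothetical dry-run sketch of the namespace wiring seen in this log.
# Interface and namespace names are taken from the trace above; this is
# not part of the SPDK tree and only prints the commands for review.
NS, TGT_IF, INI_IF = "cvl_0_0_ns_spdk", "cvl_0_0", "cvl_0_1"

def netns_setup_cmds(ns: str, tgt: str, ini: str) -> list[str]:
    """Return, as strings, the sequence nvmf_tcp_init ran above:
    target NIC into the namespace, addresses on both sides, links up,
    and an iptables rule admitting NVMe/TCP traffic on port 4420."""
    return [
        f"ip netns add {ns}",
        f"ip link set {tgt} netns {ns}",
        f"ip addr add 10.0.0.1/24 dev {ini}",
        f"ip netns exec {ns} ip addr add 10.0.0.2/24 dev {tgt}",
        f"ip link set {ini} up",
        f"ip netns exec {ns} ip link set {tgt} up",
        f"ip netns exec {ns} ip link set lo up",
        f"iptables -I INPUT 1 -i {ini} -p tcp --dport 4420 -j ACCEPT",
    ]

for cmd in netns_setup_cmds(NS, TGT_IF, INI_IF):
    print(cmd)
```

With this wiring in place, the bidirectional pings in the trace (root namespace to 10.0.0.2, namespace to 10.0.0.1) confirm the two sides can reach each other before `nvmf_tgt` is launched inside the namespace.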
00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 115151 00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 115151 ']' 00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 115151 00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115151 00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115151' 00:22:11.502 killing process with pid 115151 00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 115151 00:22:11.502 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 115151 00:22:11.502 [2024-12-10 22:53:18.868141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114d4b0 is same with the state(6) to be set 00:22:11.502 Write completed with error 
(sct=0, sc=8) 00:22:11.502 starting I/O failed: -6
00:22:11.502 Write completed with error (sct=0, sc=8)  [message repeated for each in-flight write]
00:22:11.502 starting I/O failed: -6  [repeated]
00:22:11.502 [2024-12-10 22:53:18.869638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:11.502 [2024-12-10 22:53:18.869979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114de70 is same with the state(6) to be set  [repeated]
00:22:11.502 NVMe io qpair process completion error
00:22:11.502 [2024-12-10 22:53:18.871960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114e810 is same with the state(6) to be set  [repeated]
00:22:11.502 Write completed with error (sct=0, sc=8)  [repeated, interleaved with "starting I/O failed: -6"]
00:22:11.502 [2024-12-10 22:53:18.885273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ac690 is same with the state(6) to be set  [repeated]
00:22:11.503 [2024-12-10 22:53:18.886008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:11.503 [2024-12-10 22:53:18.886100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10acb60 is same with the state(6) to be set  [repeated]
00:22:11.503 Write completed with error (sct=0, sc=8)  [repeated, interleaved with "starting I/O failed: -6"]
00:22:11.503 [2024-12-10 22:53:18.886996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abcf0 is same with the state(6) to be set  [repeated]
00:22:11.503 [2024-12-10 22:53:18.887116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:11.503 Write completed with error (sct=0, sc=8)  [repeated]
00:22:11.503 starting I/O failed: -6
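Every failed write in this log carries the same status pair, (sct=0, sc=8). Decoded against the NVMe base specification's status tables, sct=0 is the Generic Command Status type and sc=0x08 is "Command Aborted due to SQ Deletion" — consistent with the surrounding `CQ transport error -6` records, since deleting the failed TCP qpairs aborts their in-flight writes. The sketch below is a hedged illustration for reading the log, not SPDK code; the table is a small excerpt of the spec's generic-status values and the helper name is ours.

```python
# Minimal decoder for the (sct, sc) status pairs printed in this log.
# GENERIC_STATUS excerpts the NVMe base spec "Generic Command Status"
# table (sct == 0x0); names and structure here are illustrative only.

GENERIC_STATUS = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x06: "Internal Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Render an NVMe completion status as human-readable text."""
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"Generic Command Status 0x{sc:02x}")
    return f"sct=0x{sct:x}, sc=0x{sc:02x}"

# The repeated "(sct=0, sc=8)" completions above: writes aborted when
# their submission queue (the failed TCP qpair) was deleted.
print(decode_status(0, 8))  # Command Aborted due to SQ Deletion
```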
00:22:11.503 Write completed with error (sct=0, sc=8)  [repeated, interleaved with "starting I/O failed: -6"]
00:22:11.503 [2024-12-10 22:53:18.888208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:11.504 Write completed with error (sct=0, sc=8)  [repeated, interleaved with "starting I/O failed: -6"]
00:22:11.504 [2024-12-10 22:53:18.889941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:11.504 NVMe io qpair process completion error
00:22:11.504 Write completed with error (sct=0, sc=8)  [repeated, interleaved with "starting I/O failed: -6"]
00:22:11.504 [2024-12-10 22:53:18.891339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:11.504 [2024-12-10 22:53:18.891902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aed10 is same with the state(6) to be set  [repeated]
00:22:11.505 [2024-12-10 22:53:18.892400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:11.505 [2024-12-10 22:53:18.892480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10af1e0 is same with the state(6) to be set  [repeated]
00:22:11.505 Write completed with error (sct=0, sc=8)  [repeated, interleaved with "starting I/O failed: -6"]
00:22:11.505 [2024-12-10 22:53:18.893372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ae370 is same with the state(6) to be set  [repeated]
00:22:11.505 [2024-12-10 22:53:18.893513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:11.505 Write completed with error (sct=0, sc=8)  [repeated, interleaved with "starting I/O failed: -6"]
00:22:11.505 Write
completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.505 starting I/O failed: -6 00:22:11.505 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 
Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 [2024-12-10 22:53:18.895169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:11.506 NVMe io qpair process completion error 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: 
-6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 [2024-12-10 22:53:18.896551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:11.506 Write 
completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O 
failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 [2024-12-10 22:53:18.897461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write 
completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.506 starting I/O failed: -6 00:22:11.506 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 
00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 [2024-12-10 22:53:18.898625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: 
-6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O 
failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting 
I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 [2024-12-10 22:53:18.900733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:11.507 NVMe io qpair process completion error 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 
00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 [2024-12-10 22:53:18.901930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on 
qpair id 4 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 starting I/O failed: -6 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.507 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 
Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 [2024-12-10 22:53:18.902907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 Write completed with error (sct=0, sc=8) 00:22:11.508 starting I/O failed: -6 00:22:11.508 
00:22:11.508 Write completed with error (sct=0, sc=8)
00:22:11.508 starting I/O failed: -6
[... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines repeated, omitted ...]
00:22:11.508 [2024-12-10 22:53:18.904055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... identical "Write completed with error" / "starting I/O failed: -6" lines repeated, omitted ...]
00:22:11.509 [2024-12-10 22:53:18.906138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:11.509 NVMe io qpair process completion error
[... identical "Write completed with error" / "starting I/O failed: -6" lines repeated, omitted ...]
00:22:11.511 [2024-12-10 22:53:18.914685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... identical "Write completed with error" / "starting I/O failed: -6" lines repeated, omitted ...]
00:22:11.511 [2024-12-10 22:53:18.915787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... identical "Write completed with error" / "starting I/O failed: -6" lines repeated, omitted ...]
00:22:11.511 [2024-12-10 22:53:18.917131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... identical "Write completed with error" / "starting I/O failed: -6" lines repeated, omitted ...]
completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 
Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 [2024-12-10 22:53:18.919755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:11.512 NVMe io qpair process completion error 00:22:11.512 Write completed 
with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 
00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 [2024-12-10 22:53:18.920955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 
00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 [2024-12-10 22:53:18.922028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write 
completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.512 starting I/O failed: -6 00:22:11.512 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 
00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 
00:22:11.513 [2024-12-10 22:53:18.923218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 
starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 
00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 [2024-12-10 22:53:18.925224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:11.513 NVMe io qpair process completion error 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 
00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 Write completed with error (sct=0, sc=8) 00:22:11.513 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed 
with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 [2024-12-10 22:53:18.926492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 
starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 [2024-12-10 22:53:18.927552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: 
-6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with 
error (sct=0, sc=8) 00:22:11.514 starting I/O failed: -6 00:22:11.514 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines omitted ...]
00:22:11.514 [2024-12-10 22:53:18.928700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines omitted ...]
00:22:11.515 [2024-12-10 22:53:18.930692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:11.515 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines omitted ...]
00:22:11.515 [2024-12-10 22:53:18.931888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines omitted ...]
00:22:11.515 [2024-12-10 22:53:18.933003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines omitted ...]
00:22:11.516 [2024-12-10 22:53:18.934189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines omitted ...]
00:22:11.516 [2024-12-10 22:53:18.937628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:11.516 NVMe io qpair process completion error
00:22:11.516 Initializing NVMe Controllers
00:22:11.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:11.516 Controller IO queue size 128, less than required.
00:22:11.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
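The "CQ transport error -6 (No such device or address)" entries above report a negative errno value; a minimal sketch showing that errno 6 is `ENXIO`, whose Linux message is exactly the string printed in the log (the `errno`/`os` usage is illustrative, not part of the test itself):

```python
import errno
import os

# -6 in the SPDK log is a negated errno; look up errno 6.
code = 6
name = errno.errorcode[code]      # symbolic name, e.g. ENXIO
message = os.strerror(code)       # platform message for that errno

print(name, "->", message)
```

On Linux this prints `ENXIO -> No such device or address`, matching the transport-error text emitted by `spdk_nvme_qpair_process_completions`.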
00:22:11.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:11.516 Controller IO queue size 128, less than required.
00:22:11.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:11.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:11.516 Controller IO queue size 128, less than required.
00:22:11.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:11.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:11.516 Controller IO queue size 128, less than required.
00:22:11.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:11.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:11.516 Controller IO queue size 128, less than required.
00:22:11.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:11.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:11.516 Controller IO queue size 128, less than required.
00:22:11.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:11.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:11.516 Controller IO queue size 128, less than required.
00:22:11.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:11.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:11.516 Controller IO queue size 128, less than required.
00:22:11.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:11.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:11.516 Controller IO queue size 128, less than required.
00:22:11.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:11.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:11.516 Controller IO queue size 128, less than required.
00:22:11.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:11.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:11.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:11.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:11.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:11.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:11.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:11.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:11.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:11.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:11.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:11.517 Initialization complete. Launching workers.
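The perf summary that follows reports per-controller IOPS and latency plus a Total row. A small sketch cross-checking how that Total row aggregates the per-controller rows (values transcribed from the output below; the assumption that Total is the IOPS sum with an IOPS-weighted mean latency is ours, not stated by `spdk_nvme_perf`):

```python
# Per-controller rows from the spdk_nvme_perf summary:
# (IOPS, MiB/s, Average latency us, min us, max us)
rows = {
    "cnode2":  (1856.41, 79.77, 68970.05, 1130.40, 120570.77),
    "cnode4":  (1867.98, 80.26, 68565.66,  984.40, 120571.34),
    "cnode7":  (1884.56, 80.98, 67981.64,  909.85, 142436.30),
    "cnode3":  (1904.64, 81.84, 67288.35,  820.89, 117142.13),
    "cnode1":  (1888.27, 81.14, 67824.48,  566.65, 122808.03),
    "cnode10": (1813.85, 77.94, 70641.58,  941.98, 121109.30),
    "cnode6":  (1811.89, 77.85, 70761.43,  825.20, 123866.26),
    "cnode5":  (1805.12, 77.56, 71064.39, 1104.70, 127198.10),
    "cnode9":  (1836.77, 78.92, 69867.45,  920.44, 129789.78),
    "cnode8":  (1839.17, 79.03, 69806.11,  838.34, 112329.62),
}

total_iops = sum(r[0] for r in rows.values())
total_mibs = sum(r[1] for r in rows.values())
# IOPS-weighted mean of the per-controller average latencies.
weighted_avg = sum(r[0] * r[2] for r in rows.values()) / total_iops
overall_min = min(r[3] for r in rows.values())
overall_max = max(r[4] for r in rows.values())

print(f"Total: {total_iops:.2f} {total_mibs:.2f} "
      f"{weighted_avg:.2f} {overall_min:.2f} {overall_max:.2f}")
```

Running this reproduces the reported Total row (18508.67 IOPS, 795.29 MiB/s, ~69254 us average, min 566.65, max 142436.30) to within per-row rounding.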
00:22:11.517 ========================================================
00:22:11.517 Latency(us)
00:22:11.517 Device Information : IOPS MiB/s Average min max
00:22:11.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1856.41 79.77 68970.05 1130.40 120570.77
00:22:11.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1867.98 80.26 68565.66 984.40 120571.34
00:22:11.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1884.56 80.98 67981.64 909.85 142436.30
00:22:11.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1904.64 81.84 67288.35 820.89 117142.13
00:22:11.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1888.27 81.14 67824.48 566.65 122808.03
00:22:11.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1813.85 77.94 70641.58 941.98 121109.30
00:22:11.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1811.89 77.85 70761.43 825.20 123866.26
00:22:11.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1805.12 77.56 71064.39 1104.70 127198.10
00:22:11.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1836.77 78.92 69867.45 920.44 129789.78
00:22:11.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1839.17 79.03 69806.11 838.34 112329.62
00:22:11.517 ========================================================
00:22:11.517 Total : 18508.67 795.29 69254.24 566.65 142436.30
00:22:11.517
00:22:11.517 [2024-12-10 22:53:18.943520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6eae0 is same with the state(6) to be set
00:22:11.517 [2024-12-10 22:53:18.943637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6c9e0 is same with the state(6) to be set
00:22:11.517 [2024-12-10 22:53:18.943696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6d5f0 is same with the state(6) to be set
00:22:11.517 [2024-12-10 22:53:18.943752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6c6b0 is same with the state(6) to be set
00:22:11.517 [2024-12-10 22:53:18.943820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e720 is same with the state(6) to be set
00:22:11.517 [2024-12-10 22:53:18.943890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e900 is same with the state(6) to be set
00:22:11.517 [2024-12-10 22:53:18.943953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6d2c0 is same with the state(6) to be set
00:22:11.517 [2024-12-10 22:53:18.944007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6cd10 is same with the state(6) to be set
00:22:11.517 [2024-12-10 22:53:18.944061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6dc50 is same with the state(6) to be set
00:22:11.517 [2024-12-10 22:53:18.944115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6d920 is same with the state(6) to be set
00:22:11.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:11.777 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 115275
00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 115275
00:22:12.716 22:53:20
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 115275 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:12.716 22:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.716 rmmod nvme_tcp 00:22:12.716 rmmod nvme_fabrics 00:22:12.716 rmmod nvme_keyring 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 115151 ']' 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 115151 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 115151 ']' 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 115151 00:22:12.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (115151) - No such process 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 115151 is not found' 
00:22:12.716 Process with pid 115151 is not found 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:12.716 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.717 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:12.717 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.717 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.717 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:15.250 00:22:15.250 real 0m9.823s 00:22:15.250 user 0m23.226s 00:22:15.250 sys 0m5.733s 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:15.250 ************************************ 00:22:15.250 END TEST nvmf_shutdown_tc4 00:22:15.250 ************************************ 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:15.250 00:22:15.250 real 0m37.794s 00:22:15.250 user 1m41.916s 00:22:15.250 sys 0m12.346s 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:15.250 ************************************ 00:22:15.250 END TEST nvmf_shutdown 00:22:15.250 ************************************ 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:15.250 ************************************ 00:22:15.250 START TEST nvmf_nsid 00:22:15.250 ************************************ 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:15.250 * Looking for test storage... 
00:22:15.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:15.250 
22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:15.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.250 --rc genhtml_branch_coverage=1 00:22:15.250 --rc genhtml_function_coverage=1 00:22:15.250 --rc genhtml_legend=1 00:22:15.250 --rc geninfo_all_blocks=1 00:22:15.250 --rc 
geninfo_unexecuted_blocks=1 00:22:15.250 00:22:15.250 ' 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:15.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.250 --rc genhtml_branch_coverage=1 00:22:15.250 --rc genhtml_function_coverage=1 00:22:15.250 --rc genhtml_legend=1 00:22:15.250 --rc geninfo_all_blocks=1 00:22:15.250 --rc geninfo_unexecuted_blocks=1 00:22:15.250 00:22:15.250 ' 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:15.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.250 --rc genhtml_branch_coverage=1 00:22:15.250 --rc genhtml_function_coverage=1 00:22:15.250 --rc genhtml_legend=1 00:22:15.250 --rc geninfo_all_blocks=1 00:22:15.250 --rc geninfo_unexecuted_blocks=1 00:22:15.250 00:22:15.250 ' 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:15.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.250 --rc genhtml_branch_coverage=1 00:22:15.250 --rc genhtml_function_coverage=1 00:22:15.250 --rc genhtml_legend=1 00:22:15.250 --rc geninfo_all_blocks=1 00:22:15.250 --rc geninfo_unexecuted_blocks=1 00:22:15.250 00:22:15.250 ' 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.250 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.250 22:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:15.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:15.251 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:17.153 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.153 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:17.154 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:17.154 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:17.154 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:17.154 22:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.154 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:17.413 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:17.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:22:17.413 00:22:17.413 --- 10.0.0.2 ping statistics --- 00:22:17.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.413 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:22:17.413 00:22:17.413 --- 10.0.0.1 ping statistics --- 00:22:17.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.413 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.413 22:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=118014 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 118014 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 118014 ']' 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.413 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:17.413 [2024-12-10 22:53:25.015615] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:22:17.413 [2024-12-10 22:53:25.015683] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.413 [2024-12-10 22:53:25.085674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.413 [2024-12-10 22:53:25.138577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.413 [2024-12-10 22:53:25.138635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.413 [2024-12-10 22:53:25.138662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.413 [2024-12-10 22:53:25.138674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.413 [2024-12-10 22:53:25.138684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:17.413 [2024-12-10 22:53:25.139348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=118034 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.671 
22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=f32dfa0f-2ee5-43c9-a56e-cacb95e238cd 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a2859237-3005-451e-ac1c-7a5a350ac213 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=f2420b1e-4904-46b3-9511-bbf02b51edc9 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.671 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:17.671 null0 00:22:17.671 null1 00:22:17.671 null2 00:22:17.671 [2024-12-10 22:53:25.358332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.671 [2024-12-10 22:53:25.381362] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:22:17.671 [2024-12-10 22:53:25.381449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118034 ] 00:22:17.671 [2024-12-10 22:53:25.382576] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.929 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.929 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 118034 /var/tmp/tgt2.sock 00:22:17.929 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 118034 ']' 00:22:17.929 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:17.929 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.929 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:17.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:17.929 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.929 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:17.929 [2024-12-10 22:53:25.456305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.929 [2024-12-10 22:53:25.513087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.187 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.187 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:18.187 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:18.444 [2024-12-10 22:53:26.166606] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.702 [2024-12-10 22:53:26.182798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:18.702 nvme0n1 nvme0n2 00:22:18.702 nvme1n1 00:22:18.702 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:18.702 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:18.702 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.267 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:19.267 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:19.267 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:19.267 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:19.267 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:19.267 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:19.267 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:19.267 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:19.267 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:19.267 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:19.267 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:19.268 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:19.268 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:20.200 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:20.200 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:20.200 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:20.200 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:20.200 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:20.200 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid f32dfa0f-2ee5-43c9-a56e-cacb95e238cd 00:22:20.200 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:20.200 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:20.200 22:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:20.200 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:20.200 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:20.200 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f32dfa0f2ee543c9a56ecacb95e238cd 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F32DFA0F2EE543C9A56ECACB95E238CD 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ F32DFA0F2EE543C9A56ECACB95E238CD == \F\3\2\D\F\A\0\F\2\E\E\5\4\3\C\9\A\5\6\E\C\A\C\B\9\5\E\2\3\8\C\D ]] 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a2859237-3005-451e-ac1c-7a5a350ac213 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:20.201 
22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a28592373005451eac1c7a5a350ac213 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A28592373005451EAC1C7A5A350AC213 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A28592373005451EAC1C7A5A350AC213 == \A\2\8\5\9\2\3\7\3\0\0\5\4\5\1\E\A\C\1\C\7\A\5\A\3\5\0\A\C\2\1\3 ]] 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid f2420b1e-4904-46b3-9511-bbf02b51edc9 00:22:20.201 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:20.459 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:20.459 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:20.459 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:22:20.459 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:20.459 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f2420b1e490446b39511bbf02b51edc9 00:22:20.459 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F2420B1E490446B39511BBF02B51EDC9 00:22:20.459 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ F2420B1E490446B39511BBF02B51EDC9 == \F\2\4\2\0\B\1\E\4\9\0\4\4\6\B\3\9\5\1\1\B\B\F\0\2\B\5\1\E\D\C\9 ]] 00:22:20.459 22:53:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 118034 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 118034 ']' 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 118034 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118034 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118034' 00:22:20.459 killing process with pid 118034 00:22:20.459 22:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 118034 00:22:20.459 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 118034 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.024 rmmod nvme_tcp 00:22:21.024 rmmod nvme_fabrics 00:22:21.024 rmmod nvme_keyring 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 118014 ']' 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 118014 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 118014 ']' 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 118014 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.024 22:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118014 00:22:21.024 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.025 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.025 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118014' 00:22:21.025 killing process with pid 118014 00:22:21.025 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 118014 00:22:21.025 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 118014 00:22:21.284 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:21.284 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:21.284 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:21.284 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:21.284 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:21.284 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:21.284 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:21.284 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.284 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.284 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.284 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.284 22:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.253 22:53:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.253 00:22:23.253 real 0m8.421s 00:22:23.253 user 0m8.348s 00:22:23.253 sys 0m2.688s 00:22:23.253 22:53:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.253 22:53:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:23.253 ************************************ 00:22:23.253 END TEST nvmf_nsid 00:22:23.253 ************************************ 00:22:23.511 22:53:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:23.511 00:22:23.511 real 11m41.965s 00:22:23.511 user 27m36.673s 00:22:23.511 sys 2m46.037s 00:22:23.511 22:53:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.511 22:53:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:23.511 ************************************ 00:22:23.511 END TEST nvmf_target_extra 00:22:23.511 ************************************ 00:22:23.511 22:53:31 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:23.511 22:53:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:23.511 22:53:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.511 22:53:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.511 ************************************ 00:22:23.511 START TEST nvmf_host 00:22:23.511 ************************************ 00:22:23.511 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:23.511 * Looking for test storage... 
00:22:23.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:23.511 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:23.511 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:23.511 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:23.511 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:23.511 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.511 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.511 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.511 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.511 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.511 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:23.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.512 --rc genhtml_branch_coverage=1 00:22:23.512 --rc genhtml_function_coverage=1 00:22:23.512 --rc genhtml_legend=1 00:22:23.512 --rc geninfo_all_blocks=1 00:22:23.512 --rc geninfo_unexecuted_blocks=1 00:22:23.512 00:22:23.512 ' 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:23.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.512 --rc genhtml_branch_coverage=1 00:22:23.512 --rc genhtml_function_coverage=1 00:22:23.512 --rc genhtml_legend=1 00:22:23.512 --rc 
geninfo_all_blocks=1 00:22:23.512 --rc geninfo_unexecuted_blocks=1 00:22:23.512 00:22:23.512 ' 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:23.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.512 --rc genhtml_branch_coverage=1 00:22:23.512 --rc genhtml_function_coverage=1 00:22:23.512 --rc genhtml_legend=1 00:22:23.512 --rc geninfo_all_blocks=1 00:22:23.512 --rc geninfo_unexecuted_blocks=1 00:22:23.512 00:22:23.512 ' 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:23.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.512 --rc genhtml_branch_coverage=1 00:22:23.512 --rc genhtml_function_coverage=1 00:22:23.512 --rc genhtml_legend=1 00:22:23.512 --rc geninfo_all_blocks=1 00:22:23.512 --rc geninfo_unexecuted_blocks=1 00:22:23.512 00:22:23.512 ' 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:23.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.512 ************************************ 00:22:23.512 START TEST nvmf_multicontroller 00:22:23.512 ************************************ 00:22:23.512 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:23.771 * Looking for test storage... 
00:22:23.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:23.771 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:23.771 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:23.771 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:23.771 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:23.771 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.771 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.771 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.771 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:23.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.772 --rc genhtml_branch_coverage=1 00:22:23.772 --rc genhtml_function_coverage=1 
00:22:23.772 --rc genhtml_legend=1 00:22:23.772 --rc geninfo_all_blocks=1 00:22:23.772 --rc geninfo_unexecuted_blocks=1 00:22:23.772 00:22:23.772 ' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:23.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.772 --rc genhtml_branch_coverage=1 00:22:23.772 --rc genhtml_function_coverage=1 00:22:23.772 --rc genhtml_legend=1 00:22:23.772 --rc geninfo_all_blocks=1 00:22:23.772 --rc geninfo_unexecuted_blocks=1 00:22:23.772 00:22:23.772 ' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:23.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.772 --rc genhtml_branch_coverage=1 00:22:23.772 --rc genhtml_function_coverage=1 00:22:23.772 --rc genhtml_legend=1 00:22:23.772 --rc geninfo_all_blocks=1 00:22:23.772 --rc geninfo_unexecuted_blocks=1 00:22:23.772 00:22:23.772 ' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:23.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.772 --rc genhtml_branch_coverage=1 00:22:23.772 --rc genhtml_function_coverage=1 00:22:23.772 --rc genhtml_legend=1 00:22:23.772 --rc geninfo_all_blocks=1 00:22:23.772 --rc geninfo_unexecuted_blocks=1 00:22:23.772 00:22:23.772 ' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.772 22:53:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:23.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.772 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.773 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:23.773 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:23.773 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.773 22:53:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:25.674 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:25.675 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:25.675 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.675 22:53:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:25.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:25.675 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.675 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:25.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:22:25.934 00:22:25.934 --- 10.0.0.2 ping statistics --- 00:22:25.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.934 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:22:25.934 00:22:25.934 --- 10.0.0.1 ping statistics --- 00:22:25.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.934 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=120540 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 120540 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 120540 ']' 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.934 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:25.934 [2024-12-10 22:53:33.569709] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:22:25.934 [2024-12-10 22:53:33.569793] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.934 [2024-12-10 22:53:33.641134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:26.193 [2024-12-10 22:53:33.700637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.193 [2024-12-10 22:53:33.700683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:26.193 [2024-12-10 22:53:33.700698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.193 [2024-12-10 22:53:33.700710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.193 [2024-12-10 22:53:33.700720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.193 [2024-12-10 22:53:33.702047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.193 [2024-12-10 22:53:33.702129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.193 [2024-12-10 22:53:33.702134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.193 [2024-12-10 22:53:33.840700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.193 Malloc0 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.193 [2024-12-10 
22:53:33.897924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.193 [2024-12-10 22:53:33.905773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.193 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.451 Malloc1 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=120623 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 120623 /var/tmp/bdevperf.sock 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 120623 ']' 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.451 22:53:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.709 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.709 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:26.709 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:26.709 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.709 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.967 NVMe0n1 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.967 1 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:26.967 22:53:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.967 request: 00:22:26.967 { 00:22:26.967 "name": "NVMe0", 00:22:26.967 "trtype": "tcp", 00:22:26.967 "traddr": "10.0.0.2", 00:22:26.967 "adrfam": "ipv4", 00:22:26.967 "trsvcid": "4420", 00:22:26.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.967 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:26.967 "hostaddr": "10.0.0.1", 00:22:26.967 "prchk_reftag": false, 00:22:26.967 "prchk_guard": false, 00:22:26.967 "hdgst": false, 00:22:26.967 "ddgst": false, 00:22:26.967 "allow_unrecognized_csi": false, 00:22:26.967 "method": "bdev_nvme_attach_controller", 00:22:26.967 "req_id": 1 00:22:26.967 } 00:22:26.967 Got JSON-RPC error response 00:22:26.967 response: 00:22:26.967 { 00:22:26.967 "code": -114, 00:22:26.967 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:26.967 } 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:26.967 22:53:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.967 request: 00:22:26.967 { 00:22:26.967 "name": "NVMe0", 00:22:26.967 "trtype": "tcp", 00:22:26.967 "traddr": "10.0.0.2", 00:22:26.967 "adrfam": "ipv4", 00:22:26.967 "trsvcid": "4420", 00:22:26.967 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:26.967 "hostaddr": "10.0.0.1", 00:22:26.967 "prchk_reftag": false, 00:22:26.967 "prchk_guard": false, 00:22:26.967 "hdgst": false, 00:22:26.967 "ddgst": false, 00:22:26.967 "allow_unrecognized_csi": false, 00:22:26.967 "method": "bdev_nvme_attach_controller", 00:22:26.967 "req_id": 1 00:22:26.967 } 00:22:26.967 Got JSON-RPC error response 00:22:26.967 response: 00:22:26.967 { 00:22:26.967 "code": -114, 00:22:26.967 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:26.967 } 00:22:26.967 22:53:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.967 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.968 request: 00:22:26.968 { 00:22:26.968 "name": "NVMe0", 00:22:26.968 "trtype": "tcp", 00:22:26.968 "traddr": "10.0.0.2", 00:22:26.968 "adrfam": "ipv4", 00:22:26.968 "trsvcid": "4420", 00:22:26.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.968 "hostaddr": "10.0.0.1", 00:22:26.968 "prchk_reftag": false, 00:22:26.968 "prchk_guard": false, 00:22:26.968 "hdgst": false, 00:22:26.968 "ddgst": false, 00:22:26.968 "multipath": "disable", 00:22:26.968 "allow_unrecognized_csi": false, 00:22:26.968 "method": "bdev_nvme_attach_controller", 00:22:26.968 "req_id": 1 00:22:26.968 } 00:22:26.968 Got JSON-RPC error response 00:22:26.968 response: 00:22:26.968 { 00:22:26.968 "code": -114, 00:22:26.968 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:26.968 } 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.968 request: 00:22:26.968 { 00:22:26.968 "name": "NVMe0", 00:22:26.968 "trtype": "tcp", 00:22:26.968 "traddr": "10.0.0.2", 00:22:26.968 "adrfam": "ipv4", 00:22:26.968 "trsvcid": "4420", 00:22:26.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.968 "hostaddr": "10.0.0.1", 00:22:26.968 "prchk_reftag": false, 00:22:26.968 "prchk_guard": false, 00:22:26.968 "hdgst": false, 00:22:26.968 "ddgst": false, 00:22:26.968 "multipath": "failover", 00:22:26.968 "allow_unrecognized_csi": false, 00:22:26.968 "method": "bdev_nvme_attach_controller", 00:22:26.968 "req_id": 1 00:22:26.968 } 00:22:26.968 Got JSON-RPC error response 00:22:26.968 response: 00:22:26.968 { 00:22:26.968 "code": -114, 00:22:26.968 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:26.968 } 00:22:26.968 22:53:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.968 NVMe0n1 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.968 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.226 00:22:27.226 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.226 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.226 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:27.226 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.226 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.226 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.226 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:27.226 22:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:28.600 { 00:22:28.600 "results": [ 00:22:28.600 { 00:22:28.600 "job": "NVMe0n1", 00:22:28.600 "core_mask": "0x1", 00:22:28.600 "workload": "write", 00:22:28.600 "status": "finished", 00:22:28.600 "queue_depth": 128, 00:22:28.600 "io_size": 4096, 00:22:28.600 "runtime": 1.004744, 00:22:28.600 "iops": 18456.44263613418, 00:22:28.600 "mibps": 72.09547904739914, 00:22:28.600 "io_failed": 0, 00:22:28.600 "io_timeout": 0, 00:22:28.600 "avg_latency_us": 6923.075744735244, 00:22:28.600 "min_latency_us": 5946.785185185186, 00:22:28.600 "max_latency_us": 14563.555555555555 00:22:28.600 } 00:22:28.600 ], 00:22:28.600 "core_count": 1 00:22:28.600 } 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 120623 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 120623 ']' 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 120623 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120623 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120623' 00:22:28.600 killing process with pid 120623 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 120623 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 120623 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.600 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:28.858 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:28.858 [2024-12-10 22:53:34.013495] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:22:28.858 [2024-12-10 22:53:34.013605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120623 ] 00:22:28.858 [2024-12-10 22:53:34.082473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.858 [2024-12-10 22:53:34.142094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.858 [2024-12-10 22:53:34.898065] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 5804e5a3-689f-4a4b-8b1a-434e8ac73c9d already exists 00:22:28.858 [2024-12-10 22:53:34.898102] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:5804e5a3-689f-4a4b-8b1a-434e8ac73c9d alias for bdev NVMe1n1 00:22:28.858 [2024-12-10 22:53:34.898116] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:28.858 Running I/O for 1 seconds... 00:22:28.858 18416.00 IOPS, 71.94 MiB/s 00:22:28.858 Latency(us) 00:22:28.858 [2024-12-10T21:53:36.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.858 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:28.858 NVMe0n1 : 1.00 18456.44 72.10 0.00 0.00 6923.08 5946.79 14563.56 00:22:28.858 [2024-12-10T21:53:36.590Z] =================================================================================================================== 00:22:28.858 [2024-12-10T21:53:36.590Z] Total : 18456.44 72.10 0.00 0.00 6923.08 5946.79 14563.56 00:22:28.858 Received shutdown signal, test time was about 1.000000 seconds 00:22:28.858 00:22:28.858 Latency(us) 00:22:28.858 [2024-12-10T21:53:36.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.858 [2024-12-10T21:53:36.590Z] =================================================================================================================== 00:22:28.858 [2024-12-10T21:53:36.590Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:22:28.858 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:28.858 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:28.859 rmmod nvme_tcp 00:22:28.859 rmmod nvme_fabrics 00:22:28.859 rmmod nvme_keyring 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 120540 ']' 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 120540 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 120540 ']' 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 120540 
00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120540 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120540' 00:22:28.859 killing process with pid 120540 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 120540 00:22:28.859 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 120540 00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.118 22:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.025 22:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:31.025 00:22:31.025 real 0m7.509s 00:22:31.025 user 0m12.044s 00:22:31.025 sys 0m2.332s 00:22:31.025 22:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.025 22:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.025 ************************************ 00:22:31.025 END TEST nvmf_multicontroller 00:22:31.025 ************************************ 00:22:31.025 22:53:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:31.025 22:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:31.025 22:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:31.025 22:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.284 ************************************ 00:22:31.284 START TEST nvmf_aer 00:22:31.284 ************************************ 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:31.284 * Looking for test storage... 
00:22:31.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:31.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.284 --rc genhtml_branch_coverage=1 00:22:31.284 --rc genhtml_function_coverage=1 00:22:31.284 --rc genhtml_legend=1 00:22:31.284 --rc geninfo_all_blocks=1 00:22:31.284 --rc geninfo_unexecuted_blocks=1 00:22:31.284 00:22:31.284 ' 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:31.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.284 --rc 
genhtml_branch_coverage=1 00:22:31.284 --rc genhtml_function_coverage=1 00:22:31.284 --rc genhtml_legend=1 00:22:31.284 --rc geninfo_all_blocks=1 00:22:31.284 --rc geninfo_unexecuted_blocks=1 00:22:31.284 00:22:31.284 ' 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:31.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.284 --rc genhtml_branch_coverage=1 00:22:31.284 --rc genhtml_function_coverage=1 00:22:31.284 --rc genhtml_legend=1 00:22:31.284 --rc geninfo_all_blocks=1 00:22:31.284 --rc geninfo_unexecuted_blocks=1 00:22:31.284 00:22:31.284 ' 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:31.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.284 --rc genhtml_branch_coverage=1 00:22:31.284 --rc genhtml_function_coverage=1 00:22:31.284 --rc genhtml_legend=1 00:22:31.284 --rc geninfo_all_blocks=1 00:22:31.284 --rc geninfo_unexecuted_blocks=1 00:22:31.284 00:22:31.284 ' 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.284 22:53:38 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.284 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:31.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.285 22:53:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:33.816 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:33.816 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.816 22:53:41 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.816 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:33.816 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:33.817 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:33.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:22:33.817 00:22:33.817 --- 10.0.0.2 ping statistics --- 00:22:33.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.817 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:22:33.817 00:22:33.817 --- 10.0.0.1 ping statistics --- 00:22:33.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.817 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=122854 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 122854 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 122854 ']' 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.817 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.817 [2024-12-10 22:53:41.358899] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:22:33.817 [2024-12-10 22:53:41.358990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.817 [2024-12-10 22:53:41.434038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.817 [2024-12-10 22:53:41.497141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:33.817 [2024-12-10 22:53:41.497195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.817 [2024-12-10 22:53:41.497222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.817 [2024-12-10 22:53:41.497233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.817 [2024-12-10 22:53:41.497242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.817 [2024-12-10 22:53:41.501568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.817 [2024-12-10 22:53:41.501611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.817 [2024-12-10 22:53:41.501646] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.817 [2024-12-10 22:53:41.501650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.075 [2024-12-10 22:53:41.643588] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.075 Malloc0 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.075 [2024-12-10 22:53:41.706538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:22:34.075 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.076 [ 00:22:34.076 { 00:22:34.076 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:34.076 "subtype": "Discovery", 00:22:34.076 "listen_addresses": [], 00:22:34.076 "allow_any_host": true, 00:22:34.076 "hosts": [] 00:22:34.076 }, 00:22:34.076 { 00:22:34.076 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.076 "subtype": "NVMe", 00:22:34.076 "listen_addresses": [ 00:22:34.076 { 00:22:34.076 "trtype": "TCP", 00:22:34.076 "adrfam": "IPv4", 00:22:34.076 "traddr": "10.0.0.2", 00:22:34.076 "trsvcid": "4420" 00:22:34.076 } 00:22:34.076 ], 00:22:34.076 "allow_any_host": true, 00:22:34.076 "hosts": [], 00:22:34.076 "serial_number": "SPDK00000000000001", 00:22:34.076 "model_number": "SPDK bdev Controller", 00:22:34.076 "max_namespaces": 2, 00:22:34.076 "min_cntlid": 1, 00:22:34.076 "max_cntlid": 65519, 00:22:34.076 "namespaces": [ 00:22:34.076 { 00:22:34.076 "nsid": 1, 00:22:34.076 "bdev_name": "Malloc0", 00:22:34.076 "name": "Malloc0", 00:22:34.076 "nguid": "DD185ECA8D8548569504642EF9F9D86A", 00:22:34.076 "uuid": "dd185eca-8d85-4856-9504-642ef9f9d86a" 00:22:34.076 } 00:22:34.076 ] 00:22:34.076 } 00:22:34.076 ] 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=122998 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:34.076 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:34.333 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:34.333 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:34.333 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:34.333 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:34.333 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:34.333 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:22:34.333 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:22:34.333 22:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:34.333 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:34.333 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:34.334 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:34.334 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:34.334 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.334 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.592 Malloc1 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.592 [ 00:22:34.592 { 00:22:34.592 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:34.592 "subtype": "Discovery", 00:22:34.592 "listen_addresses": [], 00:22:34.592 "allow_any_host": true, 00:22:34.592 "hosts": [] 00:22:34.592 }, 00:22:34.592 { 00:22:34.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.592 "subtype": "NVMe", 00:22:34.592 "listen_addresses": [ 00:22:34.592 { 00:22:34.592 "trtype": "TCP", 00:22:34.592 "adrfam": "IPv4", 00:22:34.592 "traddr": "10.0.0.2", 00:22:34.592 "trsvcid": "4420" 00:22:34.592 } 00:22:34.592 ], 00:22:34.592 "allow_any_host": true, 00:22:34.592 "hosts": [], 00:22:34.592 "serial_number": "SPDK00000000000001", 00:22:34.592 "model_number": 
"SPDK bdev Controller", 00:22:34.592 "max_namespaces": 2, 00:22:34.592 "min_cntlid": 1, 00:22:34.592 "max_cntlid": 65519, 00:22:34.592 "namespaces": [ 00:22:34.592 { 00:22:34.592 "nsid": 1, 00:22:34.592 "bdev_name": "Malloc0", 00:22:34.592 "name": "Malloc0", 00:22:34.592 "nguid": "DD185ECA8D8548569504642EF9F9D86A", 00:22:34.592 "uuid": "dd185eca-8d85-4856-9504-642ef9f9d86a" 00:22:34.592 }, 00:22:34.592 { 00:22:34.592 "nsid": 2, 00:22:34.592 "bdev_name": "Malloc1", 00:22:34.592 "name": "Malloc1", 00:22:34.592 "nguid": "6273C4FA7BDA4581AF00223F16190F35", 00:22:34.592 "uuid": "6273c4fa-7bda-4581-af00-223f16190f35" 00:22:34.592 } 00:22:34.592 ] 00:22:34.592 } 00:22:34.592 ] 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 122998 00:22:34.592 Asynchronous Event Request test 00:22:34.592 Attaching to 10.0.0.2 00:22:34.592 Attached to 10.0.0.2 00:22:34.592 Registering asynchronous event callbacks... 00:22:34.592 Starting namespace attribute notice tests for all controllers... 00:22:34.592 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:34.592 aer_cb - Changed Namespace 00:22:34.592 Cleaning up... 
00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.592 rmmod nvme_tcp 
00:22:34.592 rmmod nvme_fabrics 00:22:34.592 rmmod nvme_keyring 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 122854 ']' 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 122854 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 122854 ']' 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 122854 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122854 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122854' 00:22:34.592 killing process with pid 122854 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 122854 00:22:34.592 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 122854 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@297 -- # iptr 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.858 22:53:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.402 22:53:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.402 00:22:37.402 real 0m5.764s 00:22:37.402 user 0m4.812s 00:22:37.402 sys 0m2.051s 00:22:37.402 22:53:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.402 22:53:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:37.402 ************************************ 00:22:37.402 END TEST nvmf_aer 00:22:37.403 ************************************ 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.403 ************************************ 00:22:37.403 START TEST nvmf_async_init 00:22:37.403 
************************************ 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:37.403 * Looking for test storage... 00:22:37.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@344 -- # case "$op" in 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:37.403 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:22:37.403 --rc genhtml_branch_coverage=1 00:22:37.403 --rc genhtml_function_coverage=1 00:22:37.403 --rc genhtml_legend=1 00:22:37.403 --rc geninfo_all_blocks=1 00:22:37.403 --rc geninfo_unexecuted_blocks=1 00:22:37.403 00:22:37.403 ' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:37.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.403 --rc genhtml_branch_coverage=1 00:22:37.403 --rc genhtml_function_coverage=1 00:22:37.403 --rc genhtml_legend=1 00:22:37.403 --rc geninfo_all_blocks=1 00:22:37.403 --rc geninfo_unexecuted_blocks=1 00:22:37.403 00:22:37.403 ' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:37.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.403 --rc genhtml_branch_coverage=1 00:22:37.403 --rc genhtml_function_coverage=1 00:22:37.403 --rc genhtml_legend=1 00:22:37.403 --rc geninfo_all_blocks=1 00:22:37.403 --rc geninfo_unexecuted_blocks=1 00:22:37.403 00:22:37.403 ' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:37.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.403 --rc genhtml_branch_coverage=1 00:22:37.403 --rc genhtml_function_coverage=1 00:22:37.403 --rc genhtml_legend=1 00:22:37.403 --rc geninfo_all_blocks=1 00:22:37.403 --rc geninfo_unexecuted_blocks=1 00:22:37.403 00:22:37.403 ' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.403 22:53:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.403 
22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.403 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4b0bf9c909974afca71ae8a187eaf788 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.404 22:53:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.310 22:53:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:39.310 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:39.310 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.310 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:39.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:39.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:39.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:22:39.311 00:22:39.311 --- 10.0.0.2 ping statistics --- 00:22:39.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.311 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:22:39.311 00:22:39.311 --- 10.0.0.1 ping statistics --- 00:22:39.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.311 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.311 22:53:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=124952 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 124952 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 124952 ']' 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.311 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.569 [2024-12-10 22:53:47.058787] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:22:39.569 [2024-12-10 22:53:47.058871] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.569 [2024-12-10 22:53:47.134388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.569 [2024-12-10 22:53:47.195149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.569 [2024-12-10 22:53:47.195200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.569 [2024-12-10 22:53:47.195215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.569 [2024-12-10 22:53:47.195228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.569 [2024-12-10 22:53:47.195238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
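The network plumbing logged above (nvmf_tcp_init in nvmf/common.sh moves one port of the e810 NIC pair into a network namespace so target and initiator traffic crosses a real link) can be sketched as a dry-run script. Interface names, addresses, and the iptables rule are taken from the log; `run` only prints each command, since executing them requires root and the actual hardware.

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_tcp_init sequence seen in the log above.
# Replace the body of run() with "$@" to execute for real (as root).
TARGET_IF=cvl_0_0        # port moved into the namespace (target side)
INITIATOR_IF=cvl_0_1     # port left in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk

run() { printf '%s\n' "$*"; }   # print only; does not execute

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

The bidirectional pings that follow in the log are the harness verifying this setup before nvmf_tgt is launched inside the namespace.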
00:22:39.569 [2024-12-10 22:53:47.195872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.828 [2024-12-10 22:53:47.329483] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.828 null0 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4b0bf9c909974afca71ae8a187eaf788 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.828 [2024-12-10 22:53:47.369764] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.828 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.089 nvme0n1 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.089 [ 00:22:40.089 { 00:22:40.089 "name": "nvme0n1", 00:22:40.089 "aliases": [ 00:22:40.089 "4b0bf9c9-0997-4afc-a71a-e8a187eaf788" 00:22:40.089 ], 00:22:40.089 "product_name": "NVMe disk", 00:22:40.089 "block_size": 512, 00:22:40.089 "num_blocks": 2097152, 00:22:40.089 "uuid": "4b0bf9c9-0997-4afc-a71a-e8a187eaf788", 00:22:40.089 "numa_id": 0, 00:22:40.089 "assigned_rate_limits": { 00:22:40.089 "rw_ios_per_sec": 0, 00:22:40.089 "rw_mbytes_per_sec": 0, 00:22:40.089 "r_mbytes_per_sec": 0, 00:22:40.089 "w_mbytes_per_sec": 0 00:22:40.089 }, 00:22:40.089 "claimed": false, 00:22:40.089 "zoned": false, 00:22:40.089 "supported_io_types": { 00:22:40.089 "read": true, 00:22:40.089 "write": true, 00:22:40.089 "unmap": false, 00:22:40.089 "flush": true, 00:22:40.089 "reset": true, 00:22:40.089 "nvme_admin": true, 00:22:40.089 "nvme_io": true, 00:22:40.089 "nvme_io_md": false, 00:22:40.089 "write_zeroes": true, 00:22:40.089 "zcopy": false, 00:22:40.089 "get_zone_info": false, 00:22:40.089 "zone_management": false, 00:22:40.089 "zone_append": false, 00:22:40.089 "compare": true, 00:22:40.089 "compare_and_write": true, 00:22:40.089 "abort": true, 00:22:40.089 "seek_hole": false, 00:22:40.089 "seek_data": false, 00:22:40.089 "copy": true, 00:22:40.089 
"nvme_iov_md": false 00:22:40.089 }, 00:22:40.089 "memory_domains": [ 00:22:40.089 { 00:22:40.089 "dma_device_id": "system", 00:22:40.089 "dma_device_type": 1 00:22:40.089 } 00:22:40.089 ], 00:22:40.089 "driver_specific": { 00:22:40.089 "nvme": [ 00:22:40.089 { 00:22:40.089 "trid": { 00:22:40.089 "trtype": "TCP", 00:22:40.089 "adrfam": "IPv4", 00:22:40.089 "traddr": "10.0.0.2", 00:22:40.089 "trsvcid": "4420", 00:22:40.089 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:40.089 }, 00:22:40.089 "ctrlr_data": { 00:22:40.089 "cntlid": 1, 00:22:40.089 "vendor_id": "0x8086", 00:22:40.089 "model_number": "SPDK bdev Controller", 00:22:40.089 "serial_number": "00000000000000000000", 00:22:40.089 "firmware_revision": "25.01", 00:22:40.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:40.089 "oacs": { 00:22:40.089 "security": 0, 00:22:40.089 "format": 0, 00:22:40.089 "firmware": 0, 00:22:40.089 "ns_manage": 0 00:22:40.089 }, 00:22:40.089 "multi_ctrlr": true, 00:22:40.089 "ana_reporting": false 00:22:40.089 }, 00:22:40.089 "vs": { 00:22:40.089 "nvme_version": "1.3" 00:22:40.089 }, 00:22:40.089 "ns_data": { 00:22:40.089 "id": 1, 00:22:40.089 "can_share": true 00:22:40.089 } 00:22:40.089 } 00:22:40.089 ], 00:22:40.089 "mp_policy": "active_passive" 00:22:40.089 } 00:22:40.089 } 00:22:40.089 ] 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.089 [2024-12-10 22:53:47.618921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.089 [2024-12-10 22:53:47.619008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1444700 (9): Bad file descriptor 00:22:40.089 [2024-12-10 22:53:47.750661] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.089 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.089 [ 00:22:40.089 { 00:22:40.089 "name": "nvme0n1", 00:22:40.089 "aliases": [ 00:22:40.089 "4b0bf9c9-0997-4afc-a71a-e8a187eaf788" 00:22:40.089 ], 00:22:40.089 "product_name": "NVMe disk", 00:22:40.089 "block_size": 512, 00:22:40.089 "num_blocks": 2097152, 00:22:40.089 "uuid": "4b0bf9c9-0997-4afc-a71a-e8a187eaf788", 00:22:40.089 "numa_id": 0, 00:22:40.089 "assigned_rate_limits": { 00:22:40.089 "rw_ios_per_sec": 0, 00:22:40.089 "rw_mbytes_per_sec": 0, 00:22:40.089 "r_mbytes_per_sec": 0, 00:22:40.089 "w_mbytes_per_sec": 0 00:22:40.089 }, 00:22:40.089 "claimed": false, 00:22:40.089 "zoned": false, 00:22:40.089 "supported_io_types": { 00:22:40.089 "read": true, 00:22:40.089 "write": true, 00:22:40.089 "unmap": false, 00:22:40.089 "flush": true, 00:22:40.089 "reset": true, 00:22:40.089 "nvme_admin": true, 00:22:40.089 "nvme_io": true, 00:22:40.089 "nvme_io_md": false, 00:22:40.089 "write_zeroes": true, 00:22:40.089 "zcopy": false, 00:22:40.089 "get_zone_info": false, 00:22:40.089 "zone_management": false, 00:22:40.089 "zone_append": false, 00:22:40.089 "compare": true, 00:22:40.089 "compare_and_write": true, 00:22:40.089 "abort": true, 00:22:40.089 "seek_hole": false, 00:22:40.089 "seek_data": false, 00:22:40.089 "copy": true, 00:22:40.089 "nvme_iov_md": false 00:22:40.089 }, 00:22:40.089 "memory_domains": [ 
00:22:40.089 { 00:22:40.089 "dma_device_id": "system", 00:22:40.089 "dma_device_type": 1 00:22:40.089 } 00:22:40.089 ], 00:22:40.089 "driver_specific": { 00:22:40.089 "nvme": [ 00:22:40.089 { 00:22:40.089 "trid": { 00:22:40.089 "trtype": "TCP", 00:22:40.089 "adrfam": "IPv4", 00:22:40.089 "traddr": "10.0.0.2", 00:22:40.089 "trsvcid": "4420", 00:22:40.089 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:40.089 }, 00:22:40.089 "ctrlr_data": { 00:22:40.089 "cntlid": 2, 00:22:40.089 "vendor_id": "0x8086", 00:22:40.089 "model_number": "SPDK bdev Controller", 00:22:40.089 "serial_number": "00000000000000000000", 00:22:40.089 "firmware_revision": "25.01", 00:22:40.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:40.089 "oacs": { 00:22:40.089 "security": 0, 00:22:40.089 "format": 0, 00:22:40.089 "firmware": 0, 00:22:40.089 "ns_manage": 0 00:22:40.089 }, 00:22:40.090 "multi_ctrlr": true, 00:22:40.090 "ana_reporting": false 00:22:40.090 }, 00:22:40.090 "vs": { 00:22:40.090 "nvme_version": "1.3" 00:22:40.090 }, 00:22:40.090 "ns_data": { 00:22:40.090 "id": 1, 00:22:40.090 "can_share": true 00:22:40.090 } 00:22:40.090 } 00:22:40.090 ], 00:22:40.090 "mp_policy": "active_passive" 00:22:40.090 } 00:22:40.090 } 00:22:40.090 ] 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.PCDV6V3QkA 
00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.PCDV6V3QkA 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.PCDV6V3QkA 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.090 [2024-12-10 22:53:47.803489] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:40.090 [2024-12-10 22:53:47.803634] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
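The TLS portion of the test (disable allow-any-host, add a `--secure-channel` listener on port 4421, authorize the host NQN with the PSK, then attach) corresponds to the RPC sequence below. It is echoed rather than executed so it runs without a live target; the `scripts/rpc.py` path assumes a default SPDK checkout.

```shell
#!/bin/sh
# RPC sequence for the secure-channel part of async_init.sh, printed
# rather than executed (a live nvmf_tgt is assumed for the real thing).
RPC="scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode0
HOSTNQN=nqn.2016-06.io.spdk:host1

rpc() { echo "$RPC $*"; }       # swap echo for "$RPC" to run for real

rpc nvmf_subsystem_allow_any_host "$NQN" --disable
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc nvmf_subsystem_add_host "$NQN" "$HOSTNQN" --psk key0
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n "$NQN" -q "$HOSTNQN" --psk key0
```

Note the log's "TLS support is considered experimental" notices on both the listener and attach steps: both ends of the connection emit them when the PSK path is exercised.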
00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.090 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.351 [2024-12-10 22:53:47.819564] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:40.351 nvme0n1 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.351 [ 00:22:40.351 { 00:22:40.351 "name": "nvme0n1", 00:22:40.351 "aliases": [ 00:22:40.351 "4b0bf9c9-0997-4afc-a71a-e8a187eaf788" 00:22:40.351 ], 00:22:40.351 "product_name": "NVMe disk", 00:22:40.351 "block_size": 512, 00:22:40.351 "num_blocks": 2097152, 00:22:40.351 "uuid": "4b0bf9c9-0997-4afc-a71a-e8a187eaf788", 00:22:40.351 "numa_id": 0, 00:22:40.351 "assigned_rate_limits": { 00:22:40.351 "rw_ios_per_sec": 0, 00:22:40.351 
"rw_mbytes_per_sec": 0, 00:22:40.351 "r_mbytes_per_sec": 0, 00:22:40.351 "w_mbytes_per_sec": 0 00:22:40.351 }, 00:22:40.351 "claimed": false, 00:22:40.351 "zoned": false, 00:22:40.351 "supported_io_types": { 00:22:40.351 "read": true, 00:22:40.351 "write": true, 00:22:40.351 "unmap": false, 00:22:40.351 "flush": true, 00:22:40.351 "reset": true, 00:22:40.351 "nvme_admin": true, 00:22:40.351 "nvme_io": true, 00:22:40.351 "nvme_io_md": false, 00:22:40.351 "write_zeroes": true, 00:22:40.351 "zcopy": false, 00:22:40.351 "get_zone_info": false, 00:22:40.351 "zone_management": false, 00:22:40.351 "zone_append": false, 00:22:40.351 "compare": true, 00:22:40.351 "compare_and_write": true, 00:22:40.351 "abort": true, 00:22:40.351 "seek_hole": false, 00:22:40.351 "seek_data": false, 00:22:40.351 "copy": true, 00:22:40.351 "nvme_iov_md": false 00:22:40.351 }, 00:22:40.351 "memory_domains": [ 00:22:40.351 { 00:22:40.351 "dma_device_id": "system", 00:22:40.351 "dma_device_type": 1 00:22:40.351 } 00:22:40.351 ], 00:22:40.351 "driver_specific": { 00:22:40.351 "nvme": [ 00:22:40.351 { 00:22:40.351 "trid": { 00:22:40.351 "trtype": "TCP", 00:22:40.351 "adrfam": "IPv4", 00:22:40.351 "traddr": "10.0.0.2", 00:22:40.351 "trsvcid": "4421", 00:22:40.351 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:40.351 }, 00:22:40.351 "ctrlr_data": { 00:22:40.351 "cntlid": 3, 00:22:40.351 "vendor_id": "0x8086", 00:22:40.351 "model_number": "SPDK bdev Controller", 00:22:40.351 "serial_number": "00000000000000000000", 00:22:40.351 "firmware_revision": "25.01", 00:22:40.351 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:40.351 "oacs": { 00:22:40.351 "security": 0, 00:22:40.351 "format": 0, 00:22:40.351 "firmware": 0, 00:22:40.351 "ns_manage": 0 00:22:40.351 }, 00:22:40.351 "multi_ctrlr": true, 00:22:40.351 "ana_reporting": false 00:22:40.351 }, 00:22:40.351 "vs": { 00:22:40.351 "nvme_version": "1.3" 00:22:40.351 }, 00:22:40.351 "ns_data": { 00:22:40.351 "id": 1, 00:22:40.351 "can_share": true 00:22:40.351 } 
00:22:40.351 } 00:22:40.351 ], 00:22:40.351 "mp_policy": "active_passive" 00:22:40.351 } 00:22:40.351 } 00:22:40.351 ] 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.PCDV6V3QkA 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.351 rmmod nvme_tcp 00:22:40.351 rmmod nvme_fabrics 00:22:40.351 rmmod nvme_keyring 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:40.351 22:53:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 124952 ']' 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 124952 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 124952 ']' 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 124952 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.351 22:53:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124952 00:22:40.351 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.351 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.351 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124952' 00:22:40.351 killing process with pid 124952 00:22:40.351 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 124952 00:22:40.351 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 124952 00:22:40.611 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:40.611 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:40.611 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:40.611 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:40.611 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:40.611 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:40.611 22:53:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:40.611 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:40.611 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:40.611 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.611 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.611 22:53:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.155 00:22:43.155 real 0m5.667s 00:22:43.155 user 0m2.167s 00:22:43.155 sys 0m1.906s 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.155 ************************************ 00:22:43.155 END TEST nvmf_async_init 00:22:43.155 ************************************ 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.155 ************************************ 00:22:43.155 START TEST dma 00:22:43.155 ************************************ 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:43.155 * 
Looking for test storage... 00:22:43.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.155 22:53:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:43.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.155 --rc genhtml_branch_coverage=1 00:22:43.155 --rc genhtml_function_coverage=1 00:22:43.155 --rc genhtml_legend=1 00:22:43.155 --rc geninfo_all_blocks=1 00:22:43.155 --rc geninfo_unexecuted_blocks=1 00:22:43.155 00:22:43.155 ' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.156 --rc genhtml_branch_coverage=1 00:22:43.156 --rc genhtml_function_coverage=1 
00:22:43.156 --rc genhtml_legend=1 00:22:43.156 --rc geninfo_all_blocks=1 00:22:43.156 --rc geninfo_unexecuted_blocks=1 00:22:43.156 00:22:43.156 ' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.156 --rc genhtml_branch_coverage=1 00:22:43.156 --rc genhtml_function_coverage=1 00:22:43.156 --rc genhtml_legend=1 00:22:43.156 --rc geninfo_all_blocks=1 00:22:43.156 --rc geninfo_unexecuted_blocks=1 00:22:43.156 00:22:43.156 ' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.156 --rc genhtml_branch_coverage=1 00:22:43.156 --rc genhtml_function_coverage=1 00:22:43.156 --rc genhtml_legend=1 00:22:43.156 --rc geninfo_all_blocks=1 00:22:43.156 --rc geninfo_unexecuted_blocks=1 00:22:43.156 00:22:43.156 ' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:43.156 
22:53:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:43.156 00:22:43.156 real 0m0.160s 00:22:43.156 user 0m0.112s 00:22:43.156 sys 0m0.056s 00:22:43.156 22:53:50 
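[Editor's note] The dma test above traces scripts/common.sh comparing lcov versions with `lt 1.15 2` via `cmp_versions`: split each version string on `.` and `-` into an array and compare component by component. A hedged re-sketch of that logic (reconstructed from the trace, padding missing components with 0):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions / lt logic traced by the dma test.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-                      # split versions on '.' and '-'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        if   (( a > b )); then [ "$op" = '>' ]; return
        elif (( a < b )); then [ "$op" = '<' ]; return
        fi
    done
    [ "$op" = '=' ]                   # all components equal
}

lt 1.15 2 && echo "1.15 < 2"
```
This is why an lcov 1.x gets the extra `--rc lcov_branch_coverage=1 ...` options in the trace: `lt 1.15 2` succeeds.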
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:43.156 ************************************ 00:22:43.156 END TEST dma 00:22:43.156 ************************************ 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.156 ************************************ 00:22:43.156 START TEST nvmf_identify 00:22:43.156 ************************************ 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:43.156 * Looking for test storage... 
00:22:43.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.156 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.156 --rc genhtml_branch_coverage=1 00:22:43.156 --rc genhtml_function_coverage=1 00:22:43.156 --rc genhtml_legend=1 00:22:43.156 --rc geninfo_all_blocks=1 00:22:43.156 --rc geninfo_unexecuted_blocks=1 00:22:43.156 00:22:43.156 ' 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:22:43.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.157 --rc genhtml_branch_coverage=1 00:22:43.157 --rc genhtml_function_coverage=1 00:22:43.157 --rc genhtml_legend=1 00:22:43.157 --rc geninfo_all_blocks=1 00:22:43.157 --rc geninfo_unexecuted_blocks=1 00:22:43.157 00:22:43.157 ' 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:43.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.157 --rc genhtml_branch_coverage=1 00:22:43.157 --rc genhtml_function_coverage=1 00:22:43.157 --rc genhtml_legend=1 00:22:43.157 --rc geninfo_all_blocks=1 00:22:43.157 --rc geninfo_unexecuted_blocks=1 00:22:43.157 00:22:43.157 ' 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:43.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.157 --rc genhtml_branch_coverage=1 00:22:43.157 --rc genhtml_function_coverage=1 00:22:43.157 --rc genhtml_legend=1 00:22:43.157 --rc geninfo_all_blocks=1 00:22:43.157 --rc geninfo_unexecuted_blocks=1 00:22:43.157 00:22:43.157 ' 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.157 22:53:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:45.065 22:53:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:45.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.065 
22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:45.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:45.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:45.065 22:53:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:45.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:45.065 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:45.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:22:45.066 00:22:45.066 --- 10.0.0.2 ping statistics --- 00:22:45.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.066 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:22:45.066 00:22:45.066 --- 10.0.0.1 ping statistics --- 00:22:45.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.066 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=127086 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 127086 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 127086 ']' 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.066 22:53:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.327 [2024-12-10 22:53:52.799586] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:22:45.327 [2024-12-10 22:53:52.799663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.327 [2024-12-10 22:53:52.875809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.327 [2024-12-10 22:53:52.934877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.327 [2024-12-10 22:53:52.934927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.327 [2024-12-10 22:53:52.934955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.327 [2024-12-10 22:53:52.934966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.327 [2024-12-10 22:53:52.934976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.327 [2024-12-10 22:53:52.936436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.327 [2024-12-10 22:53:52.936498] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.327 [2024-12-10 22:53:52.936573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.327 [2024-12-10 22:53:52.936576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.586 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.586 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:45.586 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 [2024-12-10 22:53:53.061464] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 Malloc0 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.587 22:53:53 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 [2024-12-10 22:53:53.148092] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 22:53:53 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 [ 00:22:45.587 { 00:22:45.587 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:45.587 "subtype": "Discovery", 00:22:45.587 "listen_addresses": [ 00:22:45.587 { 00:22:45.587 "trtype": "TCP", 00:22:45.587 "adrfam": "IPv4", 00:22:45.587 "traddr": "10.0.0.2", 00:22:45.587 "trsvcid": "4420" 00:22:45.587 } 00:22:45.587 ], 00:22:45.587 "allow_any_host": true, 00:22:45.587 "hosts": [] 00:22:45.587 }, 00:22:45.587 { 00:22:45.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.587 "subtype": "NVMe", 00:22:45.587 "listen_addresses": [ 00:22:45.587 { 00:22:45.587 "trtype": "TCP", 00:22:45.587 "adrfam": "IPv4", 00:22:45.587 "traddr": "10.0.0.2", 00:22:45.587 "trsvcid": "4420" 00:22:45.587 } 00:22:45.587 ], 00:22:45.587 "allow_any_host": true, 00:22:45.587 "hosts": [], 00:22:45.587 "serial_number": "SPDK00000000000001", 00:22:45.587 "model_number": "SPDK bdev Controller", 00:22:45.587 "max_namespaces": 32, 00:22:45.587 "min_cntlid": 1, 00:22:45.587 "max_cntlid": 65519, 00:22:45.587 "namespaces": [ 00:22:45.587 { 00:22:45.587 "nsid": 1, 00:22:45.587 "bdev_name": "Malloc0", 00:22:45.587 "name": "Malloc0", 00:22:45.587 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:45.587 "eui64": "ABCDEF0123456789", 00:22:45.587 "uuid": "02399236-490f-4a27-8129-a9534be1deba" 00:22:45.587 } 00:22:45.587 ] 00:22:45.587 } 00:22:45.587 ] 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.587 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:45.587 [2024-12-10 22:53:53.191428] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:22:45.587 [2024-12-10 22:53:53.191472] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127230 ] 00:22:45.587 [2024-12-10 22:53:53.241155] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:45.587 [2024-12-10 22:53:53.241224] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:45.587 [2024-12-10 22:53:53.241235] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:45.587 [2024-12-10 22:53:53.241251] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:45.587 [2024-12-10 22:53:53.241266] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:45.587 [2024-12-10 22:53:53.249018] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:45.587 [2024-12-10 22:53:53.249095] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c35690 0 00:22:45.587 [2024-12-10 22:53:53.249308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:45.587 [2024-12-10 22:53:53.249330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:45.587 [2024-12-10 22:53:53.249339] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:45.587 [2024-12-10 22:53:53.249345] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:45.587 [2024-12-10 22:53:53.249393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.249408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.249416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c35690) 00:22:45.587 [2024-12-10 22:53:53.249437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:45.587 [2024-12-10 22:53:53.249464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97100, cid 0, qid 0 00:22:45.587 [2024-12-10 22:53:53.255561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.587 [2024-12-10 22:53:53.255580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.587 [2024-12-10 22:53:53.255588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.255596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97100) on tqpair=0x1c35690 00:22:45.587 [2024-12-10 22:53:53.255619] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:45.587 [2024-12-10 22:53:53.255633] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:45.587 [2024-12-10 22:53:53.255643] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:45.587 [2024-12-10 22:53:53.255665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.255674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.255680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c35690) 
00:22:45.587 [2024-12-10 22:53:53.255692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.587 [2024-12-10 22:53:53.255715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97100, cid 0, qid 0 00:22:45.587 [2024-12-10 22:53:53.255824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.587 [2024-12-10 22:53:53.255837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.587 [2024-12-10 22:53:53.255844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.255851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97100) on tqpair=0x1c35690 00:22:45.587 [2024-12-10 22:53:53.255863] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:45.587 [2024-12-10 22:53:53.255876] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:45.587 [2024-12-10 22:53:53.255888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.255896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.255902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c35690) 00:22:45.587 [2024-12-10 22:53:53.255913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.587 [2024-12-10 22:53:53.255934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97100, cid 0, qid 0 00:22:45.587 [2024-12-10 22:53:53.256012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.587 [2024-12-10 22:53:53.256025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:45.587 [2024-12-10 22:53:53.256033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.256039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97100) on tqpair=0x1c35690 00:22:45.587 [2024-12-10 22:53:53.256049] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:45.587 [2024-12-10 22:53:53.256063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:45.587 [2024-12-10 22:53:53.256076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.256083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.256090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c35690) 00:22:45.587 [2024-12-10 22:53:53.256100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.587 [2024-12-10 22:53:53.256121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97100, cid 0, qid 0 00:22:45.587 [2024-12-10 22:53:53.256193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.587 [2024-12-10 22:53:53.256206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.587 [2024-12-10 22:53:53.256214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.587 [2024-12-10 22:53:53.256220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97100) on tqpair=0x1c35690 00:22:45.588 [2024-12-10 22:53:53.256230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:45.588 [2024-12-10 22:53:53.256247] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.256256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.256262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c35690) 00:22:45.588 [2024-12-10 22:53:53.256273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.588 [2024-12-10 22:53:53.256293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97100, cid 0, qid 0 00:22:45.588 [2024-12-10 22:53:53.256378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.588 [2024-12-10 22:53:53.256392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.588 [2024-12-10 22:53:53.256399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.256406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97100) on tqpair=0x1c35690 00:22:45.588 [2024-12-10 22:53:53.256415] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:45.588 [2024-12-10 22:53:53.256424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:45.588 [2024-12-10 22:53:53.256437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:45.588 [2024-12-10 22:53:53.256554] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:45.588 [2024-12-10 22:53:53.256564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:45.588 [2024-12-10 22:53:53.256580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.256587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.256594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c35690) 00:22:45.588 [2024-12-10 22:53:53.256604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.588 [2024-12-10 22:53:53.256641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97100, cid 0, qid 0 00:22:45.588 [2024-12-10 22:53:53.256732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.588 [2024-12-10 22:53:53.256747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.588 [2024-12-10 22:53:53.256754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.256760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97100) on tqpair=0x1c35690 00:22:45.588 [2024-12-10 22:53:53.256769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:45.588 [2024-12-10 22:53:53.256786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.256796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.256802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c35690) 00:22:45.588 [2024-12-10 22:53:53.256813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.588 [2024-12-10 22:53:53.256839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97100, cid 0, qid 0 00:22:45.588 [2024-12-10 
22:53:53.256910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.588 [2024-12-10 22:53:53.256922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.588 [2024-12-10 22:53:53.256929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.256936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97100) on tqpair=0x1c35690 00:22:45.588 [2024-12-10 22:53:53.256945] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:45.588 [2024-12-10 22:53:53.256954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:45.588 [2024-12-10 22:53:53.256967] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:45.588 [2024-12-10 22:53:53.256982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:45.588 [2024-12-10 22:53:53.257000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.257008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c35690) 00:22:45.588 [2024-12-10 22:53:53.257019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.588 [2024-12-10 22:53:53.257039] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97100, cid 0, qid 0 00:22:45.588 [2024-12-10 22:53:53.257164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.588 [2024-12-10 22:53:53.257177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:45.588 [2024-12-10 22:53:53.257184] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.257191] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c35690): datao=0, datal=4096, cccid=0 00:22:45.588 [2024-12-10 22:53:53.257199] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c97100) on tqpair(0x1c35690): expected_datao=0, payload_size=4096 00:22:45.588 [2024-12-10 22:53:53.257207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.257225] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.257236] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.297625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.588 [2024-12-10 22:53:53.297645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.588 [2024-12-10 22:53:53.297653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.297660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97100) on tqpair=0x1c35690 00:22:45.588 [2024-12-10 22:53:53.297673] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:45.588 [2024-12-10 22:53:53.297683] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:45.588 [2024-12-10 22:53:53.297691] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:45.588 [2024-12-10 22:53:53.297700] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:45.588 [2024-12-10 22:53:53.297709] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:45.588 [2024-12-10 22:53:53.297717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:45.588 [2024-12-10 22:53:53.297741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:45.588 [2024-12-10 22:53:53.297760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.297769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.297776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c35690) 00:22:45.588 [2024-12-10 22:53:53.297788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:45.588 [2024-12-10 22:53:53.297811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97100, cid 0, qid 0 00:22:45.588 [2024-12-10 22:53:53.297895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.588 [2024-12-10 22:53:53.297909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.588 [2024-12-10 22:53:53.297916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.297923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97100) on tqpair=0x1c35690 00:22:45.588 [2024-12-10 22:53:53.297936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.297944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.297950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c35690) 00:22:45.588 [2024-12-10 22:53:53.297960] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.588 [2024-12-10 22:53:53.297971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.297978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.297985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c35690) 00:22:45.588 [2024-12-10 22:53:53.297994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.588 [2024-12-10 22:53:53.298004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.298011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.298018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c35690) 00:22:45.588 [2024-12-10 22:53:53.298027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.588 [2024-12-10 22:53:53.298036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.298044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.298050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c35690) 00:22:45.588 [2024-12-10 22:53:53.298059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.588 [2024-12-10 22:53:53.298068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:45.588 [2024-12-10 22:53:53.298091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:45.588 [2024-12-10 22:53:53.298105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.298113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c35690) 00:22:45.588 [2024-12-10 22:53:53.298123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.588 [2024-12-10 22:53:53.298146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97100, cid 0, qid 0 00:22:45.588 [2024-12-10 22:53:53.298158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97280, cid 1, qid 0 00:22:45.588 [2024-12-10 22:53:53.298170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97400, cid 2, qid 0 00:22:45.588 [2024-12-10 22:53:53.298179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97580, cid 3, qid 0 00:22:45.588 [2024-12-10 22:53:53.298187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97700, cid 4, qid 0 00:22:45.588 [2024-12-10 22:53:53.298278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.588 [2024-12-10 22:53:53.298291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.588 [2024-12-10 22:53:53.298298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.588 [2024-12-10 22:53:53.298305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97700) on tqpair=0x1c35690 00:22:45.589 [2024-12-10 22:53:53.298314] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:45.589 [2024-12-10 22:53:53.298323] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:45.589 [2024-12-10 22:53:53.298341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c35690) 00:22:45.589 [2024-12-10 22:53:53.298363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.589 [2024-12-10 22:53:53.298384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97700, cid 4, qid 0 00:22:45.589 [2024-12-10 22:53:53.298476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.589 [2024-12-10 22:53:53.298490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.589 [2024-12-10 22:53:53.298498] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298504] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c35690): datao=0, datal=4096, cccid=4 00:22:45.589 [2024-12-10 22:53:53.298512] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c97700) on tqpair(0x1c35690): expected_datao=0, payload_size=4096 00:22:45.589 [2024-12-10 22:53:53.298520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298530] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298538] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.589 [2024-12-10 22:53:53.298571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.589 [2024-12-10 22:53:53.298578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1c97700) on tqpair=0x1c35690 00:22:45.589 [2024-12-10 22:53:53.298605] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:45.589 [2024-12-10 22:53:53.298646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c35690) 00:22:45.589 [2024-12-10 22:53:53.298669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.589 [2024-12-10 22:53:53.298681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c35690) 00:22:45.589 [2024-12-10 22:53:53.298704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.589 [2024-12-10 22:53:53.298731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97700, cid 4, qid 0 00:22:45.589 [2024-12-10 22:53:53.298743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97880, cid 5, qid 0 00:22:45.589 [2024-12-10 22:53:53.298867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.589 [2024-12-10 22:53:53.298881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.589 [2024-12-10 22:53:53.298888] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298894] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c35690): datao=0, datal=1024, cccid=4 00:22:45.589 [2024-12-10 22:53:53.298902] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c97700) on tqpair(0x1c35690): expected_datao=0, payload_size=1024 00:22:45.589 [2024-12-10 22:53:53.298910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298920] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298928] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.589 [2024-12-10 22:53:53.298945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.589 [2024-12-10 22:53:53.298952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.589 [2024-12-10 22:53:53.298959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97880) on tqpair=0x1c35690 00:22:45.855 [2024-12-10 22:53:53.341941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.855 [2024-12-10 22:53:53.341962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.855 [2024-12-10 22:53:53.341970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.341977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97700) on tqpair=0x1c35690 00:22:45.855 [2024-12-10 22:53:53.341995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.342004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c35690) 00:22:45.855 [2024-12-10 22:53:53.342015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.855 [2024-12-10 22:53:53.342060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97700, cid 4, qid 0 00:22:45.855 [2024-12-10 22:53:53.342198] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.855 [2024-12-10 22:53:53.342211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.855 [2024-12-10 22:53:53.342218] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.342224] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c35690): datao=0, datal=3072, cccid=4 00:22:45.855 [2024-12-10 22:53:53.342232] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c97700) on tqpair(0x1c35690): expected_datao=0, payload_size=3072 00:22:45.855 [2024-12-10 22:53:53.342239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.342250] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.342258] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.342269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.855 [2024-12-10 22:53:53.342279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.855 [2024-12-10 22:53:53.342286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.342292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97700) on tqpair=0x1c35690 00:22:45.855 [2024-12-10 22:53:53.342308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.342317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c35690) 00:22:45.855 [2024-12-10 22:53:53.342328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.855 [2024-12-10 22:53:53.342356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97700, cid 4, qid 0 00:22:45.855 [2024-12-10 
22:53:53.342448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.855 [2024-12-10 22:53:53.342461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.855 [2024-12-10 22:53:53.342468] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.342474] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c35690): datao=0, datal=8, cccid=4 00:22:45.855 [2024-12-10 22:53:53.342482] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c97700) on tqpair(0x1c35690): expected_datao=0, payload_size=8 00:22:45.855 [2024-12-10 22:53:53.342489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.342499] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.342507] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.382631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.855 [2024-12-10 22:53:53.382651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.855 [2024-12-10 22:53:53.382659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.855 [2024-12-10 22:53:53.382666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97700) on tqpair=0x1c35690 00:22:45.855 ===================================================== 00:22:45.855 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:45.855 ===================================================== 00:22:45.855 Controller Capabilities/Features 00:22:45.855 ================================ 00:22:45.855 Vendor ID: 0000 00:22:45.855 Subsystem Vendor ID: 0000 00:22:45.855 Serial Number: .................... 00:22:45.855 Model Number: ........................................ 
00:22:45.855 Firmware Version: 25.01 00:22:45.855 Recommended Arb Burst: 0 00:22:45.855 IEEE OUI Identifier: 00 00 00 00:22:45.855 Multi-path I/O 00:22:45.855 May have multiple subsystem ports: No 00:22:45.855 May have multiple controllers: No 00:22:45.855 Associated with SR-IOV VF: No 00:22:45.855 Max Data Transfer Size: 131072 00:22:45.855 Max Number of Namespaces: 0 00:22:45.855 Max Number of I/O Queues: 1024 00:22:45.855 NVMe Specification Version (VS): 1.3 00:22:45.855 NVMe Specification Version (Identify): 1.3 00:22:45.855 Maximum Queue Entries: 128 00:22:45.855 Contiguous Queues Required: Yes 00:22:45.855 Arbitration Mechanisms Supported 00:22:45.855 Weighted Round Robin: Not Supported 00:22:45.855 Vendor Specific: Not Supported 00:22:45.855 Reset Timeout: 15000 ms 00:22:45.855 Doorbell Stride: 4 bytes 00:22:45.855 NVM Subsystem Reset: Not Supported 00:22:45.855 Command Sets Supported 00:22:45.855 NVM Command Set: Supported 00:22:45.855 Boot Partition: Not Supported 00:22:45.855 Memory Page Size Minimum: 4096 bytes 00:22:45.855 Memory Page Size Maximum: 4096 bytes 00:22:45.855 Persistent Memory Region: Not Supported 00:22:45.855 Optional Asynchronous Events Supported 00:22:45.855 Namespace Attribute Notices: Not Supported 00:22:45.855 Firmware Activation Notices: Not Supported 00:22:45.855 ANA Change Notices: Not Supported 00:22:45.855 PLE Aggregate Log Change Notices: Not Supported 00:22:45.855 LBA Status Info Alert Notices: Not Supported 00:22:45.855 EGE Aggregate Log Change Notices: Not Supported 00:22:45.855 Normal NVM Subsystem Shutdown event: Not Supported 00:22:45.855 Zone Descriptor Change Notices: Not Supported 00:22:45.855 Discovery Log Change Notices: Supported 00:22:45.855 Controller Attributes 00:22:45.855 128-bit Host Identifier: Not Supported 00:22:45.855 Non-Operational Permissive Mode: Not Supported 00:22:45.855 NVM Sets: Not Supported 00:22:45.856 Read Recovery Levels: Not Supported 00:22:45.856 Endurance Groups: Not Supported 00:22:45.856 
Predictable Latency Mode: Not Supported 00:22:45.856 Traffic Based Keep ALive: Not Supported 00:22:45.856 Namespace Granularity: Not Supported 00:22:45.856 SQ Associations: Not Supported 00:22:45.856 UUID List: Not Supported 00:22:45.856 Multi-Domain Subsystem: Not Supported 00:22:45.856 Fixed Capacity Management: Not Supported 00:22:45.856 Variable Capacity Management: Not Supported 00:22:45.856 Delete Endurance Group: Not Supported 00:22:45.856 Delete NVM Set: Not Supported 00:22:45.856 Extended LBA Formats Supported: Not Supported 00:22:45.856 Flexible Data Placement Supported: Not Supported 00:22:45.856 00:22:45.856 Controller Memory Buffer Support 00:22:45.856 ================================ 00:22:45.856 Supported: No 00:22:45.856 00:22:45.856 Persistent Memory Region Support 00:22:45.856 ================================ 00:22:45.856 Supported: No 00:22:45.856 00:22:45.856 Admin Command Set Attributes 00:22:45.856 ============================ 00:22:45.856 Security Send/Receive: Not Supported 00:22:45.856 Format NVM: Not Supported 00:22:45.856 Firmware Activate/Download: Not Supported 00:22:45.856 Namespace Management: Not Supported 00:22:45.856 Device Self-Test: Not Supported 00:22:45.856 Directives: Not Supported 00:22:45.856 NVMe-MI: Not Supported 00:22:45.856 Virtualization Management: Not Supported 00:22:45.856 Doorbell Buffer Config: Not Supported 00:22:45.856 Get LBA Status Capability: Not Supported 00:22:45.856 Command & Feature Lockdown Capability: Not Supported 00:22:45.856 Abort Command Limit: 1 00:22:45.856 Async Event Request Limit: 4 00:22:45.856 Number of Firmware Slots: N/A 00:22:45.856 Firmware Slot 1 Read-Only: N/A 00:22:45.856 Firmware Activation Without Reset: N/A 00:22:45.856 Multiple Update Detection Support: N/A 00:22:45.856 Firmware Update Granularity: No Information Provided 00:22:45.856 Per-Namespace SMART Log: No 00:22:45.856 Asymmetric Namespace Access Log Page: Not Supported 00:22:45.856 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:45.856 Command Effects Log Page: Not Supported 00:22:45.856 Get Log Page Extended Data: Supported 00:22:45.856 Telemetry Log Pages: Not Supported 00:22:45.856 Persistent Event Log Pages: Not Supported 00:22:45.856 Supported Log Pages Log Page: May Support 00:22:45.856 Commands Supported & Effects Log Page: Not Supported 00:22:45.856 Feature Identifiers & Effects Log Page:May Support 00:22:45.856 NVMe-MI Commands & Effects Log Page: May Support 00:22:45.856 Data Area 4 for Telemetry Log: Not Supported 00:22:45.856 Error Log Page Entries Supported: 128 00:22:45.856 Keep Alive: Not Supported 00:22:45.856 00:22:45.856 NVM Command Set Attributes 00:22:45.856 ========================== 00:22:45.856 Submission Queue Entry Size 00:22:45.856 Max: 1 00:22:45.856 Min: 1 00:22:45.856 Completion Queue Entry Size 00:22:45.856 Max: 1 00:22:45.856 Min: 1 00:22:45.856 Number of Namespaces: 0 00:22:45.856 Compare Command: Not Supported 00:22:45.856 Write Uncorrectable Command: Not Supported 00:22:45.856 Dataset Management Command: Not Supported 00:22:45.856 Write Zeroes Command: Not Supported 00:22:45.856 Set Features Save Field: Not Supported 00:22:45.856 Reservations: Not Supported 00:22:45.856 Timestamp: Not Supported 00:22:45.856 Copy: Not Supported 00:22:45.856 Volatile Write Cache: Not Present 00:22:45.856 Atomic Write Unit (Normal): 1 00:22:45.856 Atomic Write Unit (PFail): 1 00:22:45.856 Atomic Compare & Write Unit: 1 00:22:45.856 Fused Compare & Write: Supported 00:22:45.856 Scatter-Gather List 00:22:45.856 SGL Command Set: Supported 00:22:45.856 SGL Keyed: Supported 00:22:45.856 SGL Bit Bucket Descriptor: Not Supported 00:22:45.856 SGL Metadata Pointer: Not Supported 00:22:45.856 Oversized SGL: Not Supported 00:22:45.856 SGL Metadata Address: Not Supported 00:22:45.856 SGL Offset: Supported 00:22:45.856 Transport SGL Data Block: Not Supported 00:22:45.856 Replay Protected Memory Block: Not Supported 00:22:45.856 00:22:45.856 
Firmware Slot Information 00:22:45.856 ========================= 00:22:45.856 Active slot: 0 00:22:45.856 00:22:45.856 00:22:45.856 Error Log 00:22:45.856 ========= 00:22:45.856 00:22:45.856 Active Namespaces 00:22:45.856 ================= 00:22:45.856 Discovery Log Page 00:22:45.856 ================== 00:22:45.856 Generation Counter: 2 00:22:45.856 Number of Records: 2 00:22:45.856 Record Format: 0 00:22:45.856 00:22:45.856 Discovery Log Entry 0 00:22:45.856 ---------------------- 00:22:45.856 Transport Type: 3 (TCP) 00:22:45.856 Address Family: 1 (IPv4) 00:22:45.856 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:45.856 Entry Flags: 00:22:45.856 Duplicate Returned Information: 1 00:22:45.856 Explicit Persistent Connection Support for Discovery: 1 00:22:45.856 Transport Requirements: 00:22:45.856 Secure Channel: Not Required 00:22:45.856 Port ID: 0 (0x0000) 00:22:45.856 Controller ID: 65535 (0xffff) 00:22:45.856 Admin Max SQ Size: 128 00:22:45.856 Transport Service Identifier: 4420 00:22:45.856 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:45.856 Transport Address: 10.0.0.2 00:22:45.856 Discovery Log Entry 1 00:22:45.856 ---------------------- 00:22:45.856 Transport Type: 3 (TCP) 00:22:45.856 Address Family: 1 (IPv4) 00:22:45.856 Subsystem Type: 2 (NVM Subsystem) 00:22:45.856 Entry Flags: 00:22:45.856 Duplicate Returned Information: 0 00:22:45.856 Explicit Persistent Connection Support for Discovery: 0 00:22:45.856 Transport Requirements: 00:22:45.856 Secure Channel: Not Required 00:22:45.856 Port ID: 0 (0x0000) 00:22:45.856 Controller ID: 65535 (0xffff) 00:22:45.856 Admin Max SQ Size: 128 00:22:45.856 Transport Service Identifier: 4420 00:22:45.856 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:45.856 Transport Address: 10.0.0.2 [2024-12-10 22:53:53.382783] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:45.856 [2024-12-10 
22:53:53.382806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97100) on tqpair=0x1c35690 00:22:45.856 [2024-12-10 22:53:53.382820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.856 [2024-12-10 22:53:53.382830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97280) on tqpair=0x1c35690 00:22:45.856 [2024-12-10 22:53:53.382838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.856 [2024-12-10 22:53:53.382847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97400) on tqpair=0x1c35690 00:22:45.856 [2024-12-10 22:53:53.382855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.856 [2024-12-10 22:53:53.382863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97580) on tqpair=0x1c35690 00:22:45.856 [2024-12-10 22:53:53.382871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.856 [2024-12-10 22:53:53.382885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.856 [2024-12-10 22:53:53.382893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.856 [2024-12-10 22:53:53.382900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c35690) 00:22:45.856 [2024-12-10 22:53:53.382912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.856 [2024-12-10 22:53:53.382937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97580, cid 3, qid 0 00:22:45.856 [2024-12-10 22:53:53.383007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.856 [2024-12-10 
22:53:53.383020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.856 [2024-12-10 22:53:53.383027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.856 [2024-12-10 22:53:53.383034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97580) on tqpair=0x1c35690 00:22:45.856 [2024-12-10 22:53:53.383046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.856 [2024-12-10 22:53:53.383054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.856 [2024-12-10 22:53:53.383061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c35690) 00:22:45.856 [2024-12-10 22:53:53.383071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.856 [2024-12-10 22:53:53.383098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97580, cid 3, qid 0 00:22:45.856 [2024-12-10 22:53:53.383189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.856 [2024-12-10 22:53:53.383204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.856 [2024-12-10 22:53:53.383211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.856 [2024-12-10 22:53:53.383218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97580) on tqpair=0x1c35690 00:22:45.856 [2024-12-10 22:53:53.383227] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:45.856 [2024-12-10 22:53:53.383235] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:45.856 [2024-12-10 22:53:53.383252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.856 [2024-12-10 22:53:53.383261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.856 
[2024-12-10 22:53:53.383268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c35690) 00:22:45.856 [2024-12-10 22:53:53.383279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.856 [2024-12-10 22:53:53.383300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97580, cid 3, qid 0 00:22:45.857 [2024-12-10 22:53:53.383379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.857 [2024-12-10 22:53:53.383394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.857 [2024-12-10 22:53:53.383401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.383407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97580) on tqpair=0x1c35690 00:22:45.857 [2024-12-10 22:53:53.383425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.383435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.383442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c35690) 00:22:45.857 [2024-12-10 22:53:53.383452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.857 [2024-12-10 22:53:53.383473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97580, cid 3, qid 0 00:22:45.857 [2024-12-10 22:53:53.387557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.857 [2024-12-10 22:53:53.387575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.857 [2024-12-10 22:53:53.387582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.387589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97580) on 
tqpair=0x1c35690 00:22:45.857 [2024-12-10 22:53:53.387607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.387617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.387624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c35690) 00:22:45.857 [2024-12-10 22:53:53.387634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.857 [2024-12-10 22:53:53.387656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c97580, cid 3, qid 0 00:22:45.857 [2024-12-10 22:53:53.387755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.857 [2024-12-10 22:53:53.387768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.857 [2024-12-10 22:53:53.387775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.387781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c97580) on tqpair=0x1c35690 00:22:45.857 [2024-12-10 22:53:53.387795] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:22:45.857 00:22:45.857 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:45.857 [2024-12-10 22:53:53.424117] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:22:45.857 [2024-12-10 22:53:53.424166] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127236 ] 00:22:45.857 [2024-12-10 22:53:53.476010] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:45.857 [2024-12-10 22:53:53.476057] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:45.857 [2024-12-10 22:53:53.476067] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:45.857 [2024-12-10 22:53:53.476080] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:45.857 [2024-12-10 22:53:53.476092] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:45.857 [2024-12-10 22:53:53.476562] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:45.857 [2024-12-10 22:53:53.476618] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2048690 0 00:22:45.857 [2024-12-10 22:53:53.482576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:45.857 [2024-12-10 22:53:53.482595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:45.857 [2024-12-10 22:53:53.482602] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:45.857 [2024-12-10 22:53:53.482608] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:45.857 [2024-12-10 22:53:53.482653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.482666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.482673] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048690) 00:22:45.857 [2024-12-10 22:53:53.482687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:45.857 [2024-12-10 22:53:53.482714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa100, cid 0, qid 0 00:22:45.857 [2024-12-10 22:53:53.490560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.857 [2024-12-10 22:53:53.490578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.857 [2024-12-10 22:53:53.490586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.490593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa100) on tqpair=0x2048690 00:22:45.857 [2024-12-10 22:53:53.490611] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:45.857 [2024-12-10 22:53:53.490623] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:45.857 [2024-12-10 22:53:53.490633] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:45.857 [2024-12-10 22:53:53.490650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.490659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.490665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048690) 00:22:45.857 [2024-12-10 22:53:53.490676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.857 [2024-12-10 22:53:53.490700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa100, cid 0, qid 0 00:22:45.857 [2024-12-10 22:53:53.490813] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.857 [2024-12-10 22:53:53.490826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.857 [2024-12-10 22:53:53.490837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.490845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa100) on tqpair=0x2048690 00:22:45.857 [2024-12-10 22:53:53.490853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:45.857 [2024-12-10 22:53:53.490866] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:45.857 [2024-12-10 22:53:53.490879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.490886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.490893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048690) 00:22:45.857 [2024-12-10 22:53:53.490903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.857 [2024-12-10 22:53:53.490925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa100, cid 0, qid 0 00:22:45.857 [2024-12-10 22:53:53.491008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.857 [2024-12-10 22:53:53.491020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.857 [2024-12-10 22:53:53.491027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.491034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa100) on tqpair=0x2048690 00:22:45.857 [2024-12-10 22:53:53.491042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:22:45.857 [2024-12-10 22:53:53.491056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:45.857 [2024-12-10 22:53:53.491068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.491076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.491082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048690) 00:22:45.857 [2024-12-10 22:53:53.491092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.857 [2024-12-10 22:53:53.491113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa100, cid 0, qid 0 00:22:45.857 [2024-12-10 22:53:53.491186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.857 [2024-12-10 22:53:53.491198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.857 [2024-12-10 22:53:53.491205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.491212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa100) on tqpair=0x2048690 00:22:45.857 [2024-12-10 22:53:53.491220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:45.857 [2024-12-10 22:53:53.491237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.491246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.491253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048690) 00:22:45.857 [2024-12-10 22:53:53.491263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.857 [2024-12-10 22:53:53.491283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa100, cid 0, qid 0 00:22:45.857 [2024-12-10 22:53:53.491357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.857 [2024-12-10 22:53:53.491369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.857 [2024-12-10 22:53:53.491376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.491383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa100) on tqpair=0x2048690 00:22:45.857 [2024-12-10 22:53:53.491394] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:45.857 [2024-12-10 22:53:53.491404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:45.857 [2024-12-10 22:53:53.491417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:45.857 [2024-12-10 22:53:53.491528] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:45.857 [2024-12-10 22:53:53.491536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:45.857 [2024-12-10 22:53:53.491556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.491565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.857 [2024-12-10 22:53:53.491572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048690) 00:22:45.857 [2024-12-10 22:53:53.491582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.857 [2024-12-10 22:53:53.491604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa100, cid 0, qid 0 00:22:45.857 [2024-12-10 22:53:53.491714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.858 [2024-12-10 22:53:53.491726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.858 [2024-12-10 22:53:53.491733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.491740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa100) on tqpair=0x2048690 00:22:45.858 [2024-12-10 22:53:53.491748] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:45.858 [2024-12-10 22:53:53.491764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.491773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.491780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048690) 00:22:45.858 [2024-12-10 22:53:53.491790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.858 [2024-12-10 22:53:53.491811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa100, cid 0, qid 0 00:22:45.858 [2024-12-10 22:53:53.491888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.858 [2024-12-10 22:53:53.491902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.858 [2024-12-10 22:53:53.491909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.491916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa100) on tqpair=0x2048690 00:22:45.858 [2024-12-10 22:53:53.491923] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:45.858 [2024-12-10 22:53:53.491931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:45.858 [2024-12-10 22:53:53.491945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:45.858 [2024-12-10 22:53:53.491963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:45.858 [2024-12-10 22:53:53.491978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.491986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048690) 00:22:45.858 [2024-12-10 22:53:53.491997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.858 [2024-12-10 22:53:53.492022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa100, cid 0, qid 0 00:22:45.858 [2024-12-10 22:53:53.492139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.858 [2024-12-10 22:53:53.492154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.858 [2024-12-10 22:53:53.492161] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492167] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048690): datao=0, datal=4096, cccid=0 00:22:45.858 [2024-12-10 22:53:53.492175] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20aa100) on tqpair(0x2048690): expected_datao=0, payload_size=4096 00:22:45.858 [2024-12-10 22:53:53.492182] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492192] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492200] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.858 [2024-12-10 22:53:53.492221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.858 [2024-12-10 22:53:53.492228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa100) on tqpair=0x2048690 00:22:45.858 [2024-12-10 22:53:53.492245] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:45.858 [2024-12-10 22:53:53.492254] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:45.858 [2024-12-10 22:53:53.492262] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:45.858 [2024-12-10 22:53:53.492268] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:45.858 [2024-12-10 22:53:53.492276] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:45.858 [2024-12-10 22:53:53.492284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:45.858 [2024-12-10 22:53:53.492303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:45.858 [2024-12-10 22:53:53.492318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492327] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048690) 00:22:45.858 [2024-12-10 22:53:53.492344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:45.858 [2024-12-10 22:53:53.492366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa100, cid 0, qid 0 00:22:45.858 [2024-12-10 22:53:53.492452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.858 [2024-12-10 22:53:53.492466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.858 [2024-12-10 22:53:53.492473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa100) on tqpair=0x2048690 00:22:45.858 [2024-12-10 22:53:53.492489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048690) 00:22:45.858 [2024-12-10 22:53:53.492513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.858 [2024-12-10 22:53:53.492523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2048690) 00:22:45.858 [2024-12-10 22:53:53.492562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:45.858 [2024-12-10 22:53:53.492573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2048690) 00:22:45.858 [2024-12-10 22:53:53.492607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.858 [2024-12-10 22:53:53.492617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.858 [2024-12-10 22:53:53.492639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.858 [2024-12-10 22:53:53.492648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:45.858 [2024-12-10 22:53:53.492670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:45.858 [2024-12-10 22:53:53.492684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2048690) 00:22:45.858 [2024-12-10 22:53:53.492701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.858 [2024-12-10 22:53:53.492724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x20aa100, cid 0, qid 0 00:22:45.858 [2024-12-10 22:53:53.492736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa280, cid 1, qid 0 00:22:45.858 [2024-12-10 22:53:53.492743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa400, cid 2, qid 0 00:22:45.858 [2024-12-10 22:53:53.492751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.858 [2024-12-10 22:53:53.492758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa700, cid 4, qid 0 00:22:45.858 [2024-12-10 22:53:53.492888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.858 [2024-12-10 22:53:53.492901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.858 [2024-12-10 22:53:53.492908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa700) on tqpair=0x2048690 00:22:45.858 [2024-12-10 22:53:53.492923] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:45.858 [2024-12-10 22:53:53.492931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:45.858 [2024-12-10 22:53:53.492946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:45.858 [2024-12-10 22:53:53.492962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:45.858 [2024-12-10 22:53:53.492974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.492982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.858 [2024-12-10 
22:53:53.492988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2048690) 00:22:45.858 [2024-12-10 22:53:53.492998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:45.858 [2024-12-10 22:53:53.493026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa700, cid 4, qid 0 00:22:45.858 [2024-12-10 22:53:53.493135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.858 [2024-12-10 22:53:53.493149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.858 [2024-12-10 22:53:53.493156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.493162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa700) on tqpair=0x2048690 00:22:45.858 [2024-12-10 22:53:53.493229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:45.858 [2024-12-10 22:53:53.493249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:45.858 [2024-12-10 22:53:53.493265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.858 [2024-12-10 22:53:53.493272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2048690) 00:22:45.858 [2024-12-10 22:53:53.493283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.858 [2024-12-10 22:53:53.493304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa700, cid 4, qid 0 00:22:45.858 [2024-12-10 22:53:53.493399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.859 [2024-12-10 22:53:53.493412] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.859 [2024-12-10 22:53:53.493419] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493426] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048690): datao=0, datal=4096, cccid=4 00:22:45.859 [2024-12-10 22:53:53.493433] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20aa700) on tqpair(0x2048690): expected_datao=0, payload_size=4096 00:22:45.859 [2024-12-10 22:53:53.493440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493457] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493466] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.859 [2024-12-10 22:53:53.493487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.859 [2024-12-10 22:53:53.493494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa700) on tqpair=0x2048690 00:22:45.859 [2024-12-10 22:53:53.493525] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:45.859 [2024-12-10 22:53:53.493555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:45.859 [2024-12-10 22:53:53.493578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:45.859 [2024-12-10 22:53:53.493593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x2048690) 00:22:45.859 [2024-12-10 22:53:53.493611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.859 [2024-12-10 22:53:53.493633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa700, cid 4, qid 0 00:22:45.859 [2024-12-10 22:53:53.493747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.859 [2024-12-10 22:53:53.493760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.859 [2024-12-10 22:53:53.493767] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493773] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048690): datao=0, datal=4096, cccid=4 00:22:45.859 [2024-12-10 22:53:53.493785] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20aa700) on tqpair(0x2048690): expected_datao=0, payload_size=4096 00:22:45.859 [2024-12-10 22:53:53.493793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493810] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493819] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.859 [2024-12-10 22:53:53.493855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.859 [2024-12-10 22:53:53.493862] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa700) on tqpair=0x2048690 00:22:45.859 [2024-12-10 22:53:53.493893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:45.859 
[2024-12-10 22:53:53.493913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:45.859 [2024-12-10 22:53:53.493928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.493936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2048690) 00:22:45.859 [2024-12-10 22:53:53.493947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.859 [2024-12-10 22:53:53.493968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa700, cid 4, qid 0 00:22:45.859 [2024-12-10 22:53:53.494053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.859 [2024-12-10 22:53:53.494065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.859 [2024-12-10 22:53:53.494072] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.494078] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048690): datao=0, datal=4096, cccid=4 00:22:45.859 [2024-12-10 22:53:53.494085] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20aa700) on tqpair(0x2048690): expected_datao=0, payload_size=4096 00:22:45.859 [2024-12-10 22:53:53.494092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.494108] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.494117] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.494129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.859 [2024-12-10 22:53:53.494139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.859 [2024-12-10 22:53:53.494145] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.494152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa700) on tqpair=0x2048690 00:22:45.859 [2024-12-10 22:53:53.494166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:45.859 [2024-12-10 22:53:53.494181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:45.859 [2024-12-10 22:53:53.494199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:45.859 [2024-12-10 22:53:53.494212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:45.859 [2024-12-10 22:53:53.494221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:45.859 [2024-12-10 22:53:53.494230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:45.859 [2024-12-10 22:53:53.494242] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:45.859 [2024-12-10 22:53:53.494251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:45.859 [2024-12-10 22:53:53.494260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:45.859 [2024-12-10 22:53:53.494278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.494287] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2048690) 00:22:45.859 [2024-12-10 22:53:53.494297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.859 [2024-12-10 22:53:53.494308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.494315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.494322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2048690) 00:22:45.859 [2024-12-10 22:53:53.494331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.859 [2024-12-10 22:53:53.494356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa700, cid 4, qid 0 00:22:45.859 [2024-12-10 22:53:53.494383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa880, cid 5, qid 0 00:22:45.859 [2024-12-10 22:53:53.498559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.859 [2024-12-10 22:53:53.498575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.859 [2024-12-10 22:53:53.498582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.498588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa700) on tqpair=0x2048690 00:22:45.859 [2024-12-10 22:53:53.498597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.859 [2024-12-10 22:53:53.498606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.859 [2024-12-10 22:53:53.498612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.498618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa880) on tqpair=0x2048690 00:22:45.859 [2024-12-10 
22:53:53.498633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.498657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2048690) 00:22:45.859 [2024-12-10 22:53:53.498667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.859 [2024-12-10 22:53:53.498689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa880, cid 5, qid 0 00:22:45.859 [2024-12-10 22:53:53.498829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.859 [2024-12-10 22:53:53.498841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.859 [2024-12-10 22:53:53.498848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.498855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa880) on tqpair=0x2048690 00:22:45.859 [2024-12-10 22:53:53.498870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.859 [2024-12-10 22:53:53.498879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2048690) 00:22:45.859 [2024-12-10 22:53:53.498890] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.859 [2024-12-10 22:53:53.498910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa880, cid 5, qid 0 00:22:45.859 [2024-12-10 22:53:53.498985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.860 [2024-12-10 22:53:53.498999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.860 [2024-12-10 22:53:53.499006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x20aa880) on tqpair=0x2048690 00:22:45.860 [2024-12-10 22:53:53.499033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2048690) 00:22:45.860 [2024-12-10 22:53:53.499053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.860 [2024-12-10 22:53:53.499073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa880, cid 5, qid 0 00:22:45.860 [2024-12-10 22:53:53.499150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.860 [2024-12-10 22:53:53.499161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.860 [2024-12-10 22:53:53.499169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa880) on tqpair=0x2048690 00:22:45.860 [2024-12-10 22:53:53.499199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2048690) 00:22:45.860 [2024-12-10 22:53:53.499221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.860 [2024-12-10 22:53:53.499233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2048690) 00:22:45.860 [2024-12-10 22:53:53.499250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:45.860 [2024-12-10 22:53:53.499262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2048690) 00:22:45.860 [2024-12-10 22:53:53.499278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.860 [2024-12-10 22:53:53.499290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2048690) 00:22:45.860 [2024-12-10 22:53:53.499307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.860 [2024-12-10 22:53:53.499329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa880, cid 5, qid 0 00:22:45.860 [2024-12-10 22:53:53.499340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa700, cid 4, qid 0 00:22:45.860 [2024-12-10 22:53:53.499348] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aaa00, cid 6, qid 0 00:22:45.860 [2024-12-10 22:53:53.499355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aab80, cid 7, qid 0 00:22:45.860 [2024-12-10 22:53:53.499550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.860 [2024-12-10 22:53:53.499564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.860 [2024-12-10 22:53:53.499572] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499578] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048690): datao=0, datal=8192, cccid=5 00:22:45.860 [2024-12-10 22:53:53.499586] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20aa880) on tqpair(0x2048690): expected_datao=0, payload_size=8192 00:22:45.860 [2024-12-10 22:53:53.499593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499611] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499620] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.860 [2024-12-10 22:53:53.499647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.860 [2024-12-10 22:53:53.499653] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499660] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048690): datao=0, datal=512, cccid=4 00:22:45.860 [2024-12-10 22:53:53.499667] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20aa700) on tqpair(0x2048690): expected_datao=0, payload_size=512 00:22:45.860 [2024-12-10 22:53:53.499674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499683] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499691] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.860 [2024-12-10 22:53:53.499708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.860 [2024-12-10 22:53:53.499715] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499721] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048690): datao=0, datal=512, cccid=6 00:22:45.860 [2024-12-10 22:53:53.499728] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x20aaa00) on tqpair(0x2048690): expected_datao=0, payload_size=512 00:22:45.860 [2024-12-10 22:53:53.499735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499744] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499751] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.860 [2024-12-10 22:53:53.499768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.860 [2024-12-10 22:53:53.499775] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499781] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048690): datao=0, datal=4096, cccid=7 00:22:45.860 [2024-12-10 22:53:53.499788] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20aab80) on tqpair(0x2048690): expected_datao=0, payload_size=4096 00:22:45.860 [2024-12-10 22:53:53.499795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499805] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499812] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.860 [2024-12-10 22:53:53.499833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.860 [2024-12-10 22:53:53.499840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa880) on tqpair=0x2048690 00:22:45.860 [2024-12-10 22:53:53.499867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.860 [2024-12-10 22:53:53.499878] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.860 [2024-12-10 22:53:53.499885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa700) on tqpair=0x2048690 00:22:45.860 [2024-12-10 22:53:53.499921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.860 [2024-12-10 22:53:53.499933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.860 [2024-12-10 22:53:53.499939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aaa00) on tqpair=0x2048690 00:22:45.860 [2024-12-10 22:53:53.499955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.860 [2024-12-10 22:53:53.499964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.860 [2024-12-10 22:53:53.499970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.860 [2024-12-10 22:53:53.499979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aab80) on tqpair=0x2048690 00:22:45.860 ===================================================== 00:22:45.860 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:45.860 ===================================================== 00:22:45.860 Controller Capabilities/Features 00:22:45.860 ================================ 00:22:45.860 Vendor ID: 8086 00:22:45.860 Subsystem Vendor ID: 8086 00:22:45.860 Serial Number: SPDK00000000000001 00:22:45.860 Model Number: SPDK bdev Controller 00:22:45.860 Firmware Version: 25.01 00:22:45.860 Recommended Arb Burst: 6 00:22:45.860 IEEE OUI Identifier: e4 d2 5c 00:22:45.860 Multi-path I/O 00:22:45.860 May have multiple subsystem ports: Yes 00:22:45.860 May have multiple controllers: Yes 00:22:45.860 Associated with SR-IOV VF: No 
00:22:45.860 Max Data Transfer Size: 131072 00:22:45.860 Max Number of Namespaces: 32 00:22:45.860 Max Number of I/O Queues: 127 00:22:45.860 NVMe Specification Version (VS): 1.3 00:22:45.860 NVMe Specification Version (Identify): 1.3 00:22:45.860 Maximum Queue Entries: 128 00:22:45.860 Contiguous Queues Required: Yes 00:22:45.860 Arbitration Mechanisms Supported 00:22:45.860 Weighted Round Robin: Not Supported 00:22:45.860 Vendor Specific: Not Supported 00:22:45.860 Reset Timeout: 15000 ms 00:22:45.860 Doorbell Stride: 4 bytes 00:22:45.860 NVM Subsystem Reset: Not Supported 00:22:45.860 Command Sets Supported 00:22:45.860 NVM Command Set: Supported 00:22:45.860 Boot Partition: Not Supported 00:22:45.860 Memory Page Size Minimum: 4096 bytes 00:22:45.860 Memory Page Size Maximum: 4096 bytes 00:22:45.860 Persistent Memory Region: Not Supported 00:22:45.860 Optional Asynchronous Events Supported 00:22:45.860 Namespace Attribute Notices: Supported 00:22:45.860 Firmware Activation Notices: Not Supported 00:22:45.860 ANA Change Notices: Not Supported 00:22:45.860 PLE Aggregate Log Change Notices: Not Supported 00:22:45.860 LBA Status Info Alert Notices: Not Supported 00:22:45.860 EGE Aggregate Log Change Notices: Not Supported 00:22:45.860 Normal NVM Subsystem Shutdown event: Not Supported 00:22:45.860 Zone Descriptor Change Notices: Not Supported 00:22:45.860 Discovery Log Change Notices: Not Supported 00:22:45.860 Controller Attributes 00:22:45.860 128-bit Host Identifier: Supported 00:22:45.860 Non-Operational Permissive Mode: Not Supported 00:22:45.860 NVM Sets: Not Supported 00:22:45.860 Read Recovery Levels: Not Supported 00:22:45.860 Endurance Groups: Not Supported 00:22:45.860 Predictable Latency Mode: Not Supported 00:22:45.860 Traffic Based Keep ALive: Not Supported 00:22:45.860 Namespace Granularity: Not Supported 00:22:45.860 SQ Associations: Not Supported 00:22:45.860 UUID List: Not Supported 00:22:45.860 Multi-Domain Subsystem: Not Supported 00:22:45.860 
Fixed Capacity Management: Not Supported 00:22:45.860 Variable Capacity Management: Not Supported 00:22:45.860 Delete Endurance Group: Not Supported 00:22:45.861 Delete NVM Set: Not Supported 00:22:45.861 Extended LBA Formats Supported: Not Supported 00:22:45.861 Flexible Data Placement Supported: Not Supported 00:22:45.861 00:22:45.861 Controller Memory Buffer Support 00:22:45.861 ================================ 00:22:45.861 Supported: No 00:22:45.861 00:22:45.861 Persistent Memory Region Support 00:22:45.861 ================================ 00:22:45.861 Supported: No 00:22:45.861 00:22:45.861 Admin Command Set Attributes 00:22:45.861 ============================ 00:22:45.861 Security Send/Receive: Not Supported 00:22:45.861 Format NVM: Not Supported 00:22:45.861 Firmware Activate/Download: Not Supported 00:22:45.861 Namespace Management: Not Supported 00:22:45.861 Device Self-Test: Not Supported 00:22:45.861 Directives: Not Supported 00:22:45.861 NVMe-MI: Not Supported 00:22:45.861 Virtualization Management: Not Supported 00:22:45.861 Doorbell Buffer Config: Not Supported 00:22:45.861 Get LBA Status Capability: Not Supported 00:22:45.861 Command & Feature Lockdown Capability: Not Supported 00:22:45.861 Abort Command Limit: 4 00:22:45.861 Async Event Request Limit: 4 00:22:45.861 Number of Firmware Slots: N/A 00:22:45.861 Firmware Slot 1 Read-Only: N/A 00:22:45.861 Firmware Activation Without Reset: N/A 00:22:45.861 Multiple Update Detection Support: N/A 00:22:45.861 Firmware Update Granularity: No Information Provided 00:22:45.861 Per-Namespace SMART Log: No 00:22:45.861 Asymmetric Namespace Access Log Page: Not Supported 00:22:45.861 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:45.861 Command Effects Log Page: Supported 00:22:45.861 Get Log Page Extended Data: Supported 00:22:45.861 Telemetry Log Pages: Not Supported 00:22:45.861 Persistent Event Log Pages: Not Supported 00:22:45.861 Supported Log Pages Log Page: May Support 00:22:45.861 Commands Supported & 
Effects Log Page: Not Supported 00:22:45.861 Feature Identifiers & Effects Log Page:May Support 00:22:45.861 NVMe-MI Commands & Effects Log Page: May Support 00:22:45.861 Data Area 4 for Telemetry Log: Not Supported 00:22:45.861 Error Log Page Entries Supported: 128 00:22:45.861 Keep Alive: Supported 00:22:45.861 Keep Alive Granularity: 10000 ms 00:22:45.861 00:22:45.861 NVM Command Set Attributes 00:22:45.861 ========================== 00:22:45.861 Submission Queue Entry Size 00:22:45.861 Max: 64 00:22:45.861 Min: 64 00:22:45.861 Completion Queue Entry Size 00:22:45.861 Max: 16 00:22:45.861 Min: 16 00:22:45.861 Number of Namespaces: 32 00:22:45.861 Compare Command: Supported 00:22:45.861 Write Uncorrectable Command: Not Supported 00:22:45.861 Dataset Management Command: Supported 00:22:45.861 Write Zeroes Command: Supported 00:22:45.861 Set Features Save Field: Not Supported 00:22:45.861 Reservations: Supported 00:22:45.861 Timestamp: Not Supported 00:22:45.861 Copy: Supported 00:22:45.861 Volatile Write Cache: Present 00:22:45.861 Atomic Write Unit (Normal): 1 00:22:45.861 Atomic Write Unit (PFail): 1 00:22:45.861 Atomic Compare & Write Unit: 1 00:22:45.861 Fused Compare & Write: Supported 00:22:45.861 Scatter-Gather List 00:22:45.861 SGL Command Set: Supported 00:22:45.861 SGL Keyed: Supported 00:22:45.861 SGL Bit Bucket Descriptor: Not Supported 00:22:45.861 SGL Metadata Pointer: Not Supported 00:22:45.861 Oversized SGL: Not Supported 00:22:45.861 SGL Metadata Address: Not Supported 00:22:45.861 SGL Offset: Supported 00:22:45.861 Transport SGL Data Block: Not Supported 00:22:45.861 Replay Protected Memory Block: Not Supported 00:22:45.861 00:22:45.861 Firmware Slot Information 00:22:45.861 ========================= 00:22:45.861 Active slot: 1 00:22:45.861 Slot 1 Firmware Revision: 25.01 00:22:45.861 00:22:45.861 00:22:45.861 Commands Supported and Effects 00:22:45.861 ============================== 00:22:45.861 Admin Commands 00:22:45.861 -------------- 
00:22:45.861 Get Log Page (02h): Supported 00:22:45.861 Identify (06h): Supported 00:22:45.861 Abort (08h): Supported 00:22:45.861 Set Features (09h): Supported 00:22:45.861 Get Features (0Ah): Supported 00:22:45.861 Asynchronous Event Request (0Ch): Supported 00:22:45.861 Keep Alive (18h): Supported 00:22:45.861 I/O Commands 00:22:45.861 ------------ 00:22:45.861 Flush (00h): Supported LBA-Change 00:22:45.861 Write (01h): Supported LBA-Change 00:22:45.861 Read (02h): Supported 00:22:45.861 Compare (05h): Supported 00:22:45.861 Write Zeroes (08h): Supported LBA-Change 00:22:45.861 Dataset Management (09h): Supported LBA-Change 00:22:45.861 Copy (19h): Supported LBA-Change 00:22:45.861 00:22:45.861 Error Log 00:22:45.861 ========= 00:22:45.861 00:22:45.861 Arbitration 00:22:45.861 =========== 00:22:45.861 Arbitration Burst: 1 00:22:45.861 00:22:45.861 Power Management 00:22:45.861 ================ 00:22:45.861 Number of Power States: 1 00:22:45.861 Current Power State: Power State #0 00:22:45.861 Power State #0: 00:22:45.861 Max Power: 0.00 W 00:22:45.861 Non-Operational State: Operational 00:22:45.861 Entry Latency: Not Reported 00:22:45.861 Exit Latency: Not Reported 00:22:45.861 Relative Read Throughput: 0 00:22:45.861 Relative Read Latency: 0 00:22:45.861 Relative Write Throughput: 0 00:22:45.861 Relative Write Latency: 0 00:22:45.861 Idle Power: Not Reported 00:22:45.861 Active Power: Not Reported 00:22:45.861 Non-Operational Permissive Mode: Not Supported 00:22:45.861 00:22:45.861 Health Information 00:22:45.861 ================== 00:22:45.861 Critical Warnings: 00:22:45.861 Available Spare Space: OK 00:22:45.861 Temperature: OK 00:22:45.861 Device Reliability: OK 00:22:45.861 Read Only: No 00:22:45.861 Volatile Memory Backup: OK 00:22:45.861 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:45.861 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:45.861 Available Spare: 0% 00:22:45.861 Available Spare Threshold: 0% 00:22:45.861 Life Percentage 
Used:[2024-12-10 22:53:53.500100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.861 [2024-12-10 22:53:53.500113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2048690) 00:22:45.861 [2024-12-10 22:53:53.500123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.861 [2024-12-10 22:53:53.500145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aab80, cid 7, qid 0 00:22:45.861 [2024-12-10 22:53:53.500279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.861 [2024-12-10 22:53:53.500294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.861 [2024-12-10 22:53:53.500301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.861 [2024-12-10 22:53:53.500307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aab80) on tqpair=0x2048690 00:22:45.861 [2024-12-10 22:53:53.500350] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:45.861 [2024-12-10 22:53:53.500370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa100) on tqpair=0x2048690 00:22:45.861 [2024-12-10 22:53:53.500381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.861 [2024-12-10 22:53:53.500390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa280) on tqpair=0x2048690 00:22:45.861 [2024-12-10 22:53:53.500398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.861 [2024-12-10 22:53:53.500406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa400) on tqpair=0x2048690 00:22:45.861 [2024-12-10 22:53:53.500414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.861 [2024-12-10 22:53:53.500422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.861 [2024-12-10 22:53:53.500430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.861 [2024-12-10 22:53:53.500442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.861 [2024-12-10 22:53:53.500450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.861 [2024-12-10 22:53:53.500457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.861 [2024-12-10 22:53:53.500467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.861 [2024-12-10 22:53:53.500489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.861 [2024-12-10 22:53:53.500608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.861 [2024-12-10 22:53:53.500623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.861 [2024-12-10 22:53:53.500631] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.861 [2024-12-10 22:53:53.500637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.861 [2024-12-10 22:53:53.500648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.861 [2024-12-10 22:53:53.500656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.861 [2024-12-10 22:53:53.500662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.861 [2024-12-10 22:53:53.500672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.861 [2024-12-10 22:53:53.500698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.861 [2024-12-10 22:53:53.500789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.861 [2024-12-10 22:53:53.500803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.861 [2024-12-10 22:53:53.500814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.861 [2024-12-10 22:53:53.500821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.500828] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:45.862 [2024-12-10 22:53:53.500836] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:45.862 [2024-12-10 22:53:53.500852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.500861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.500867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.862 [2024-12-10 22:53:53.500877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.862 [2024-12-10 22:53:53.500898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.862 [2024-12-10 22:53:53.500981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.862 [2024-12-10 22:53:53.500993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.862 [2024-12-10 22:53:53.501000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501006] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.501022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.862 [2024-12-10 22:53:53.501048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.862 [2024-12-10 22:53:53.501067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.862 [2024-12-10 22:53:53.501147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.862 [2024-12-10 22:53:53.501160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.862 [2024-12-10 22:53:53.501167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.501190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.862 [2024-12-10 22:53:53.501216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.862 [2024-12-10 22:53:53.501236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.862 [2024-12-10 22:53:53.501306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.862 [2024-12-10 
22:53:53.501318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.862 [2024-12-10 22:53:53.501325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.501346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.862 [2024-12-10 22:53:53.501372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.862 [2024-12-10 22:53:53.501392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.862 [2024-12-10 22:53:53.501467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.862 [2024-12-10 22:53:53.501481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.862 [2024-12-10 22:53:53.501488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.501511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.862 [2024-12-10 22:53:53.501537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.862 [2024-12-10 
22:53:53.501566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.862 [2024-12-10 22:53:53.501641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.862 [2024-12-10 22:53:53.501654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.862 [2024-12-10 22:53:53.501661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.501684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.862 [2024-12-10 22:53:53.501710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.862 [2024-12-10 22:53:53.501730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.862 [2024-12-10 22:53:53.501800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.862 [2024-12-10 22:53:53.501812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.862 [2024-12-10 22:53:53.501819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.501841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.862 [2024-12-10 22:53:53.501866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.862 [2024-12-10 22:53:53.501887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.862 [2024-12-10 22:53:53.501960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.862 [2024-12-10 22:53:53.501971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.862 [2024-12-10 22:53:53.501978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.501985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.502000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.502009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.502016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.862 [2024-12-10 22:53:53.502026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.862 [2024-12-10 22:53:53.502045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.862 [2024-12-10 22:53:53.502122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.862 [2024-12-10 22:53:53.502139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.862 [2024-12-10 22:53:53.502147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.502154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.502170] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.502179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.502186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.862 [2024-12-10 22:53:53.502196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.862 [2024-12-10 22:53:53.502217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.862 [2024-12-10 22:53:53.502289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.862 [2024-12-10 22:53:53.502301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.862 [2024-12-10 22:53:53.502308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.502315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.502330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.502339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.502346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.862 [2024-12-10 22:53:53.502356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.862 [2024-12-10 22:53:53.502376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.862 [2024-12-10 22:53:53.502452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.862 [2024-12-10 22:53:53.502466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.862 [2024-12-10 22:53:53.502473] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.502479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.502495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.502504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.502511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048690) 00:22:45.862 [2024-12-10 22:53:53.502521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.862 [2024-12-10 22:53:53.502541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aa580, cid 3, qid 0 00:22:45.862 [2024-12-10 22:53:53.506585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.862 [2024-12-10 22:53:53.506598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.862 [2024-12-10 22:53:53.506605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.862 [2024-12-10 22:53:53.506611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aa580) on tqpair=0x2048690 00:22:45.862 [2024-12-10 22:53:53.506625] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:22:45.862 0% 00:22:45.862 Data Units Read: 0 00:22:45.862 Data Units Written: 0 00:22:45.862 Host Read Commands: 0 00:22:45.862 Host Write Commands: 0 00:22:45.862 Controller Busy Time: 0 minutes 00:22:45.862 Power Cycles: 0 00:22:45.862 Power On Hours: 0 hours 00:22:45.862 Unsafe Shutdowns: 0 00:22:45.862 Unrecoverable Media Errors: 0 00:22:45.862 Lifetime Error Log Entries: 0 00:22:45.862 Warning Temperature Time: 0 minutes 00:22:45.862 Critical Temperature Time: 0 minutes 00:22:45.862 00:22:45.862 
Number of Queues 00:22:45.862 ================ 00:22:45.862 Number of I/O Submission Queues: 127 00:22:45.862 Number of I/O Completion Queues: 127 00:22:45.863 00:22:45.863 Active Namespaces 00:22:45.863 ================= 00:22:45.863 Namespace ID:1 00:22:45.863 Error Recovery Timeout: Unlimited 00:22:45.863 Command Set Identifier: NVM (00h) 00:22:45.863 Deallocate: Supported 00:22:45.863 Deallocated/Unwritten Error: Not Supported 00:22:45.863 Deallocated Read Value: Unknown 00:22:45.863 Deallocate in Write Zeroes: Not Supported 00:22:45.863 Deallocated Guard Field: 0xFFFF 00:22:45.863 Flush: Supported 00:22:45.863 Reservation: Supported 00:22:45.863 Namespace Sharing Capabilities: Multiple Controllers 00:22:45.863 Size (in LBAs): 131072 (0GiB) 00:22:45.863 Capacity (in LBAs): 131072 (0GiB) 00:22:45.863 Utilization (in LBAs): 131072 (0GiB) 00:22:45.863 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:45.863 EUI64: ABCDEF0123456789 00:22:45.863 UUID: 02399236-490f-4a27-8129-a9534be1deba 00:22:45.863 Thin Provisioning: Not Supported 00:22:45.863 Per-NS Atomic Units: Yes 00:22:45.863 Atomic Boundary Size (Normal): 0 00:22:45.863 Atomic Boundary Size (PFail): 0 00:22:45.863 Atomic Boundary Offset: 0 00:22:45.863 Maximum Single Source Range Length: 65535 00:22:45.863 Maximum Copy Length: 65535 00:22:45.863 Maximum Source Range Count: 1 00:22:45.863 NGUID/EUI64 Never Reused: No 00:22:45.863 Namespace Write Protected: No 00:22:45.863 Number of LBA Formats: 1 00:22:45.863 Current LBA Format: LBA Format #00 00:22:45.863 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:45.863 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:45.863 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:45.863 rmmod nvme_tcp 00:22:45.863 rmmod nvme_fabrics 00:22:45.863 rmmod nvme_keyring 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 127086 ']' 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 127086 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 127086 ']' 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 127086 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127086 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127086' 00:22:46.163 killing process with pid 127086 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 127086 00:22:46.163 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 127086 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.431 22:53:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.338 22:53:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.338 00:22:48.338 real 0m5.423s 00:22:48.338 user 0m4.535s 00:22:48.338 sys 0m1.881s 00:22:48.338 22:53:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.338 22:53:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.338 ************************************ 00:22:48.338 END TEST nvmf_identify 00:22:48.338 ************************************ 00:22:48.338 22:53:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:48.338 22:53:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:48.338 22:53:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.338 22:53:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.338 ************************************ 00:22:48.338 START TEST nvmf_perf 00:22:48.338 ************************************ 00:22:48.338 22:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:48.338 * Looking for test storage... 
00:22:48.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.338 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:48.338 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:48.338 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:48.597 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:48.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.598 --rc genhtml_branch_coverage=1 00:22:48.598 --rc genhtml_function_coverage=1 00:22:48.598 --rc genhtml_legend=1 00:22:48.598 --rc geninfo_all_blocks=1 00:22:48.598 --rc geninfo_unexecuted_blocks=1 00:22:48.598 00:22:48.598 ' 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:48.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:48.598 --rc genhtml_branch_coverage=1 00:22:48.598 --rc genhtml_function_coverage=1 00:22:48.598 --rc genhtml_legend=1 00:22:48.598 --rc geninfo_all_blocks=1 00:22:48.598 --rc geninfo_unexecuted_blocks=1 00:22:48.598 00:22:48.598 ' 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:48.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.598 --rc genhtml_branch_coverage=1 00:22:48.598 --rc genhtml_function_coverage=1 00:22:48.598 --rc genhtml_legend=1 00:22:48.598 --rc geninfo_all_blocks=1 00:22:48.598 --rc geninfo_unexecuted_blocks=1 00:22:48.598 00:22:48.598 ' 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:48.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.598 --rc genhtml_branch_coverage=1 00:22:48.598 --rc genhtml_function_coverage=1 00:22:48.598 --rc genhtml_legend=1 00:22:48.598 --rc geninfo_all_blocks=1 00:22:48.598 --rc geninfo_unexecuted_blocks=1 00:22:48.598 00:22:48.598 ' 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:48.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:48.598 22:53:56 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:48.598 22:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.142 22:53:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:51.142 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.143 
22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:51.143 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:51.143 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:51.143 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.143 22:53:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:51.143 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:22:51.143 00:22:51.143 --- 10.0.0.2 ping statistics --- 00:22:51.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.143 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:22:51.143 00:22:51.143 --- 10.0.0.1 ping statistics --- 00:22:51.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.143 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=129180 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 129180 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 129180 ']' 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.143 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.143 [2024-12-10 22:53:58.520721] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:22:51.144 [2024-12-10 22:53:58.520805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.144 [2024-12-10 22:53:58.596719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.144 [2024-12-10 22:53:58.653435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.144 [2024-12-10 22:53:58.653489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.144 [2024-12-10 22:53:58.653517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.144 [2024-12-10 22:53:58.653537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.144 [2024-12-10 22:53:58.653553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:51.144 [2024-12-10 22:53:58.655187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.144 [2024-12-10 22:53:58.655248] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.144 [2024-12-10 22:53:58.655317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.144 [2024-12-10 22:53:58.655320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.144 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.144 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:51.144 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.144 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.144 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.144 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.144 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:51.144 22:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:54.431 22:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:54.431 22:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:54.688 22:54:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:22:54.688 22:54:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:54.946 22:54:02 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:54.946 22:54:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:22:54.946 22:54:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:54.946 22:54:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:54.946 22:54:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.204 [2024-12-10 22:54:02.791219] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.204 22:54:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.462 22:54:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:55.462 22:54:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.721 22:54:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:55.721 22:54:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:55.979 22:54:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.237 [2024-12-10 22:54:03.959428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.496 22:54:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:56.754 22:54:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:22:56.754 22:54:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:56.754 22:54:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:56.754 22:54:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:58.132 Initializing NVMe Controllers 00:22:58.132 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:22:58.132 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:22:58.132 Initialization complete. Launching workers. 00:22:58.132 ======================================================== 00:22:58.132 Latency(us) 00:22:58.132 Device Information : IOPS MiB/s Average min max 00:22:58.132 PCIE (0000:88:00.0) NSID 1 from core 0: 83594.30 326.54 382.34 39.14 6760.65 00:22:58.132 ======================================================== 00:22:58.132 Total : 83594.30 326.54 382.34 39.14 6760.65 00:22:58.132 00:22:58.132 22:54:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:59.508 Initializing NVMe Controllers 00:22:59.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:59.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:59.508 Initialization complete. Launching workers. 
00:22:59.508 ======================================================== 00:22:59.508 Latency(us) 00:22:59.508 Device Information : IOPS MiB/s Average min max 00:22:59.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 11310.24 140.94 45850.47 00:22:59.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19735.02 6979.27 50887.27 00:22:59.508 ======================================================== 00:22:59.508 Total : 143.00 0.56 14314.88 140.94 50887.27 00:22:59.508 00:22:59.508 22:54:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:00.882 Initializing NVMe Controllers 00:23:00.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:00.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:00.882 Initialization complete. Launching workers. 
00:23:00.882 ======================================================== 00:23:00.882 Latency(us) 00:23:00.882 Device Information : IOPS MiB/s Average min max 00:23:00.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8450.91 33.01 3787.27 627.33 11135.71 00:23:00.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3802.51 14.85 8416.55 4617.34 16153.16 00:23:00.882 ======================================================== 00:23:00.882 Total : 12253.43 47.86 5223.84 627.33 16153.16 00:23:00.882 00:23:00.882 22:54:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:00.882 22:54:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:00.882 22:54:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:03.411 Initializing NVMe Controllers 00:23:03.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.411 Controller IO queue size 128, less than required. 00:23:03.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:03.411 Controller IO queue size 128, less than required. 00:23:03.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:03.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:03.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:03.411 Initialization complete. Launching workers. 
00:23:03.411 ======================================================== 00:23:03.411 Latency(us) 00:23:03.411 Device Information : IOPS MiB/s Average min max 00:23:03.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1713.43 428.36 75937.27 42056.67 127325.24 00:23:03.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 581.31 145.33 235177.21 78360.05 364344.61 00:23:03.411 ======================================================== 00:23:03.411 Total : 2294.73 573.68 116276.21 42056.67 364344.61 00:23:03.411 00:23:03.411 22:54:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:03.411 No valid NVMe controllers or AIO or URING devices found 00:23:03.411 Initializing NVMe Controllers 00:23:03.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.411 Controller IO queue size 128, less than required. 00:23:03.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:03.411 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:03.411 Controller IO queue size 128, less than required. 00:23:03.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:03.411 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:03.411 WARNING: Some requested NVMe devices were skipped 00:23:03.411 22:54:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:05.948 Initializing NVMe Controllers 00:23:05.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:05.948 Controller IO queue size 128, less than required. 00:23:05.948 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.948 Controller IO queue size 128, less than required. 00:23:05.948 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:05.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:05.948 Initialization complete. Launching workers. 
00:23:05.948 00:23:05.948 ==================== 00:23:05.948 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:05.948 TCP transport: 00:23:05.948 polls: 8797 00:23:05.948 idle_polls: 5466 00:23:05.948 sock_completions: 3331 00:23:05.948 nvme_completions: 6079 00:23:05.948 submitted_requests: 9098 00:23:05.948 queued_requests: 1 00:23:05.948 00:23:05.948 ==================== 00:23:05.948 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:05.948 TCP transport: 00:23:05.948 polls: 11957 00:23:05.948 idle_polls: 8571 00:23:05.948 sock_completions: 3386 00:23:05.948 nvme_completions: 6385 00:23:05.948 submitted_requests: 9620 00:23:05.948 queued_requests: 1 00:23:05.948 ======================================================== 00:23:05.948 Latency(us) 00:23:05.948 Device Information : IOPS MiB/s Average min max 00:23:05.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1516.96 379.24 86594.87 52998.24 129840.04 00:23:05.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1593.33 398.33 80905.64 40482.21 120583.57 00:23:05.948 ======================================================== 00:23:05.948 Total : 3110.30 777.57 83680.41 40482.21 129840.04 00:23:05.948 00:23:05.948 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:05.948 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@121 -- # sync 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:06.206 rmmod nvme_tcp 00:23:06.206 rmmod nvme_fabrics 00:23:06.206 rmmod nvme_keyring 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 129180 ']' 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 129180 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 129180 ']' 00:23:06.206 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 129180 00:23:06.463 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:06.463 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.463 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129180 00:23:06.463 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.463 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.463 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129180' 00:23:06.463 killing process with pid 129180 00:23:06.463 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # 
kill 129180 00:23:06.463 22:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 129180 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.842 22:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:10.399 00:23:10.399 real 0m21.598s 00:23:10.399 user 1m6.970s 00:23:10.399 sys 0m5.470s 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:10.399 ************************************ 00:23:10.399 END TEST nvmf_perf 00:23:10.399 ************************************ 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.399 ************************************ 00:23:10.399 START TEST nvmf_fio_host 00:23:10.399 ************************************ 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:10.399 * Looking for test storage... 00:23:10.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:10.399 22:54:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:10.399 22:54:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:10.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.399 --rc genhtml_branch_coverage=1 00:23:10.399 --rc genhtml_function_coverage=1 00:23:10.399 --rc genhtml_legend=1 00:23:10.399 --rc geninfo_all_blocks=1 00:23:10.399 --rc geninfo_unexecuted_blocks=1 00:23:10.399 00:23:10.399 ' 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:10.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.399 --rc genhtml_branch_coverage=1 00:23:10.399 --rc genhtml_function_coverage=1 00:23:10.399 --rc genhtml_legend=1 00:23:10.399 --rc geninfo_all_blocks=1 00:23:10.399 --rc geninfo_unexecuted_blocks=1 00:23:10.399 00:23:10.399 ' 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:10.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.399 --rc genhtml_branch_coverage=1 00:23:10.399 --rc genhtml_function_coverage=1 00:23:10.399 --rc genhtml_legend=1 00:23:10.399 --rc geninfo_all_blocks=1 00:23:10.399 --rc geninfo_unexecuted_blocks=1 00:23:10.399 00:23:10.399 ' 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:10.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.399 --rc genhtml_branch_coverage=1 00:23:10.399 --rc genhtml_function_coverage=1 00:23:10.399 --rc genhtml_legend=1 00:23:10.399 --rc geninfo_all_blocks=1 00:23:10.399 --rc geninfo_unexecuted_blocks=1 00:23:10.399 00:23:10.399 ' 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:10.399 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:10.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:10.400 22:54:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:10.400 22:54:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:23:12.367 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:12.367 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.367 22:54:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:12.367 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.367 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:12.368 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
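The discovery loop traced above (`nvmf/common.sh@410`–`@429`) resolves each NIC's PCI address to its kernel net-device name by globbing sysfs. A minimal sketch of that mechanism, using a temporary fake sysfs tree so it runs without the E810 hardware — the BDFs and `cvl_0_*` names mirror this log, not a real system:

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> netdev resolution in the trace: the kernel exposes a
# NIC's interface name under /sys/bus/pci/devices/<bdf>/net/. A fake sysfs
# tree stands in for real hardware here.
set -euo pipefail

sysfs="$(mktemp -d)"
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in "0000:0a:00.0" "0000:0a:00.1"; do
    # Glob the net/ subdirectory, then strip the path prefix -- matching
    # pci_net_devs=(".../net/"*) and "${pci_net_devs[@]##*/}" in the trace.
    pci_net_devs=("$sysfs/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "net_devs: ${net_devs[*]}"
rm -rf "$sysfs"
```

The real script additionally filters on driver state (`unknown`/`unbound`) and link `up`, as the `@368`/`@418` checks above show.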
00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:12.368 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.368 22:54:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:12.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:23:12.651 00:23:12.651 --- 10.0.0.2 ping statistics --- 00:23:12.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.651 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:12.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:23:12.651 00:23:12.651 --- 10.0.0.1 ping statistics --- 00:23:12.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.651 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=133880 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 133880 00:23:12.651 
22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 133880 ']' 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.651 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.651 [2024-12-10 22:54:20.226952] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:23:12.652 [2024-12-10 22:54:20.227037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.652 [2024-12-10 22:54:20.308068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.652 [2024-12-10 22:54:20.371620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.652 [2024-12-10 22:54:20.371677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
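The `nvmf_tcp_init` steps traced a few records earlier — create a namespace, move the target-side interface into it, assign `10.0.0.1`/`10.0.0.2`, bring links up, open port 4420 in iptables, then ping both ways — can be sketched as a dry run that prints the commands instead of executing them (the real sequence needs root; the `run` helper is an illustration device, and the interface/namespace names are copied from this log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init from the trace: one physical port becomes
# the target (inside a netns), its sibling stays in the root namespace as
# the initiator, so NVMe/TCP traffic loops over real NICs on one host.
# "run" only echoes; drop it (and run as root) to execute for real.
set -euo pipefail

run() { echo "+ $*"; }

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"           # target NIC lives in the netns
run ip addr add 10.0.0.1/24 dev "$initiator_if"    # initiator side
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
# Allow NVMe/TCP (port 4420) in through the initiator-side interface.
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                             # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1         # target -> initiator
```

The two successful single-packet pings in the log are exactly this final verification; afterwards `NVMF_APP` is prefixed with `ip netns exec $NVMF_TARGET_NAMESPACE` so `nvmf_tgt` starts inside the namespace.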
00:23:12.652 [2024-12-10 22:54:20.371706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.652 [2024-12-10 22:54:20.371718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.652 [2024-12-10 22:54:20.371728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.652 [2024-12-10 22:54:20.373506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.652 [2024-12-10 22:54:20.373601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.652 [2024-12-10 22:54:20.373565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.652 [2024-12-10 22:54:20.373604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.910 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.910 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:12.910 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:13.168 [2024-12-10 22:54:20.783407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.168 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:13.168 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.168 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.168 22:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:13.426 Malloc1 00:23:13.426 22:54:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:13.684 22:54:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:13.942 22:54:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.202 [2024-12-10 22:54:21.910386] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.202 22:54:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:14.769 22:54:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:14.769 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:14.769 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:14.769 fio-3.35 00:23:14.769 Starting 1 thread 00:23:17.295 00:23:17.295 test: (groupid=0, jobs=1): err= 0: pid=134247: Tue Dec 10 22:54:24 2024 00:23:17.295 read: IOPS=8975, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2007msec) 00:23:17.295 slat (nsec): min=1959, max=105501, avg=2543.34, stdev=1406.37 00:23:17.295 clat (usec): min=2286, max=13307, avg=7784.76, stdev=635.31 00:23:17.295 lat (usec): min=2311, max=13309, avg=7787.31, stdev=635.24 00:23:17.295 clat percentiles (usec): 00:23:17.295 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:23:17.295 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 7963], 00:23:17.295 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:23:17.295 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11207], 99.95th=[12256], 00:23:17.295 | 99.99th=[13304] 00:23:17.295 bw ( KiB/s): min=35208, max=36424, per=99.99%, avg=35900.00, stdev=506.87, samples=4 00:23:17.295 iops : min= 8802, max= 9106, avg=8975.00, stdev=126.72, samples=4 00:23:17.295 write: IOPS=8999, BW=35.2MiB/s (36.9MB/s)(70.6MiB/2007msec); 0 zone resets 00:23:17.295 slat (usec): min=2, max=105, avg= 2.61, stdev= 1.20 00:23:17.295 clat (usec): min=937, max=13082, avg=6404.91, stdev=533.12 00:23:17.295 lat (usec): min=943, max=13084, avg=6407.52, stdev=533.09 00:23:17.295 clat percentiles (usec): 00:23:17.295 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 5997], 00:23:17.295 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:23:17.295 | 
70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:23:17.295 | 99.00th=[ 7504], 99.50th=[ 7701], 99.90th=[10552], 99.95th=[11994], 00:23:17.295 | 99.99th=[13042] 00:23:17.295 bw ( KiB/s): min=35608, max=36216, per=100.00%, avg=35996.00, stdev=283.93, samples=4 00:23:17.295 iops : min= 8902, max= 9054, avg=8999.00, stdev=70.98, samples=4 00:23:17.295 lat (usec) : 1000=0.01% 00:23:17.295 lat (msec) : 2=0.02%, 4=0.11%, 10=99.71%, 20=0.16% 00:23:17.295 cpu : usr=65.80%, sys=32.60%, ctx=103, majf=0, minf=36 00:23:17.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:17.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:17.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:17.295 issued rwts: total=18014,18061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:17.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:17.295 00:23:17.295 Run status group 0 (all jobs): 00:23:17.295 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2007-2007msec 00:23:17.295 WRITE: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.6MiB (74.0MB), run=2007-2007msec 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:17.295 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:17.553 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:17.553 fio-3.35 00:23:17.553 Starting 1 thread 00:23:20.082 00:23:20.082 test: (groupid=0, jobs=1): err= 0: pid=134582: Tue Dec 10 22:54:27 2024 00:23:20.082 read: IOPS=7935, BW=124MiB/s (130MB/s)(249MiB/2009msec) 00:23:20.082 slat (nsec): min=2881, max=96635, avg=3814.83, stdev=1833.60 00:23:20.082 clat (usec): min=2924, max=18386, avg=9212.04, stdev=2184.35 00:23:20.082 lat (usec): min=2928, max=18389, avg=9215.85, stdev=2184.32 00:23:20.082 clat percentiles (usec): 00:23:20.082 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7439], 00:23:20.082 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9503], 00:23:20.082 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11994], 95.00th=[13042], 00:23:20.082 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17695], 99.95th=[17957], 00:23:20.082 | 99.99th=[18482] 00:23:20.082 bw ( KiB/s): min=57856, max=74880, per=51.30%, avg=65128.00, stdev=7109.06, samples=4 00:23:20.082 iops : min= 3616, max= 4680, avg=4070.50, stdev=444.32, samples=4 00:23:20.082 write: IOPS=4678, BW=73.1MiB/s (76.6MB/s)(134MiB/1827msec); 0 zone resets 00:23:20.082 slat (usec): min=30, max=185, avg=34.41, stdev= 6.21 00:23:20.082 clat (usec): min=5694, max=21130, avg=12216.37, stdev=2287.13 00:23:20.082 lat (usec): min=5727, max=21162, avg=12250.78, stdev=2286.81 00:23:20.082 clat percentiles (usec): 00:23:20.082 | 
1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:23:20.082 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12518], 00:23:20.082 | 70.00th=[13173], 80.00th=[14222], 90.00th=[15401], 95.00th=[16450], 00:23:20.082 | 99.00th=[17957], 99.50th=[18744], 99.90th=[20579], 99.95th=[20841], 00:23:20.082 | 99.99th=[21103] 00:23:20.082 bw ( KiB/s): min=60000, max=77792, per=90.79%, avg=67960.00, stdev=7348.65, samples=4 00:23:20.082 iops : min= 3750, max= 4862, avg=4247.50, stdev=459.29, samples=4 00:23:20.082 lat (msec) : 4=0.10%, 10=49.86%, 20=49.97%, 50=0.08% 00:23:20.082 cpu : usr=75.70%, sys=23.06%, ctx=46, majf=0, minf=61 00:23:20.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:20.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:20.082 issued rwts: total=15942,8547,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:20.082 00:23:20.082 Run status group 0 (all jobs): 00:23:20.082 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2009-2009msec 00:23:20.082 WRITE: bw=73.1MiB/s (76.6MB/s), 73.1MiB/s-73.1MiB/s (76.6MB/s-76.6MB/s), io=134MiB (140MB), run=1827-1827msec 00:23:20.082 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:20.340 22:54:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.340 rmmod nvme_tcp 00:23:20.340 rmmod nvme_fabrics 00:23:20.340 rmmod nvme_keyring 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 133880 ']' 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 133880 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 133880 ']' 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 133880 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.340 22:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 133880 00:23:20.340 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.340 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.340 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 133880' 00:23:20.340 killing process with pid 133880 00:23:20.340 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 133880 00:23:20.340 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 133880 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.598 22:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:23.139 00:23:23.139 real 0m12.643s 00:23:23.139 user 0m37.347s 00:23:23.139 sys 0m4.215s 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:23.139 ************************************ 00:23:23.139 END TEST nvmf_fio_host 00:23:23.139 ************************************ 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.139 ************************************ 00:23:23.139 START TEST nvmf_failover 00:23:23.139 ************************************ 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:23.139 * Looking for test storage... 
00:23:23.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:23.139 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:23.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.140 --rc genhtml_branch_coverage=1 00:23:23.140 --rc genhtml_function_coverage=1 00:23:23.140 --rc genhtml_legend=1 00:23:23.140 --rc geninfo_all_blocks=1 00:23:23.140 --rc geninfo_unexecuted_blocks=1 00:23:23.140 00:23:23.140 ' 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:23:23.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.140 --rc genhtml_branch_coverage=1 00:23:23.140 --rc genhtml_function_coverage=1 00:23:23.140 --rc genhtml_legend=1 00:23:23.140 --rc geninfo_all_blocks=1 00:23:23.140 --rc geninfo_unexecuted_blocks=1 00:23:23.140 00:23:23.140 ' 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:23.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.140 --rc genhtml_branch_coverage=1 00:23:23.140 --rc genhtml_function_coverage=1 00:23:23.140 --rc genhtml_legend=1 00:23:23.140 --rc geninfo_all_blocks=1 00:23:23.140 --rc geninfo_unexecuted_blocks=1 00:23:23.140 00:23:23.140 ' 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:23.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.140 --rc genhtml_branch_coverage=1 00:23:23.140 --rc genhtml_function_coverage=1 00:23:23.140 --rc genhtml_legend=1 00:23:23.140 --rc geninfo_all_blocks=1 00:23:23.140 --rc geninfo_unexecuted_blocks=1 00:23:23.140 00:23:23.140 ' 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.140 22:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:25.045 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.045 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.045 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.045 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.045 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.046 22:54:32 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:25.046 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.046 22:54:32 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:25.046 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.046 22:54:32 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:25.046 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:25.046 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:25.046 22:54:32 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:23:25.046 00:23:25.046 --- 10.0.0.2 ping statistics --- 00:23:25.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.046 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:23:25.046 00:23:25.046 --- 10.0.0.1 ping statistics --- 00:23:25.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.046 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=136902 00:23:25.046 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:25.047 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 136902 00:23:25.047 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 136902 ']' 00:23:25.047 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.047 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.047 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.047 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.047 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:25.047 [2024-12-10 22:54:32.680629] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:23:25.047 [2024-12-10 22:54:32.680719] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.047 [2024-12-10 22:54:32.752220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:25.305 [2024-12-10 22:54:32.805766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.305 [2024-12-10 22:54:32.805838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.305 [2024-12-10 22:54:32.805852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.305 [2024-12-10 22:54:32.805862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:25.305 [2024-12-10 22:54:32.805871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.305 [2024-12-10 22:54:32.807373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.305 [2024-12-10 22:54:32.807436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.305 [2024-12-10 22:54:32.807446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.305 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.305 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:25.305 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.305 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.305 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:25.305 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.305 22:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:25.563 [2024-12-10 22:54:33.214433] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.563 22:54:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:26.129 Malloc0 00:23:26.129 22:54:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:26.387 22:54:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:26.645 22:54:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.902 [2024-12-10 22:54:34.454316] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.903 22:54:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:27.161 [2024-12-10 22:54:34.783272] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:27.161 22:54:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:27.419 [2024-12-10 22:54:35.068189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:27.419 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=137192 00:23:27.419 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:27.419 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.419 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 137192 /var/tmp/bdevperf.sock 00:23:27.419 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 137192 ']' 00:23:27.419 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.419 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.419 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.419 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.419 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:27.677 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.677 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:27.677 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:28.245 NVMe0n1 00:23:28.245 22:54:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:28.503 00:23:28.503 22:54:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=137326 00:23:28.503 22:54:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:28.503 22:54:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:23:29.880 22:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.880 [2024-12-10 22:54:37.473085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.880 [2024-12-10 22:54:37.473203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.880 [2024-12-10 22:54:37.473228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.880 [2024-12-10 22:54:37.473248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.880 [2024-12-10 22:54:37.473266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.880 [2024-12-10 22:54:37.473283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.880 [2024-12-10 22:54:37.473300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.880 [2024-12-10 22:54:37.473319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.880 [2024-12-10 22:54:37.473339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.880 [2024-12-10 22:54:37.473357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.880 [2024-12-10 22:54:37.473376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.880 [2024-12-10 22:54:37.473395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.881 [2024-12-10 22:54:37.473414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403e00 is same with the state(6) to be set 00:23:29.881 22:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:33.171 22:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:33.429 00:23:33.429 22:54:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:33.687 22:54:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:36.976 22:54:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.976 [2024-12-10 22:54:44.548643] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.976 22:54:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:37.954 22:54:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:38.236 [2024-12-10 22:54:45.853746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9f60 is same with the state(6) to be set 00:23:38.236 [2024-12-10 22:54:45.853835] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9f60 is same with the state(6) to be set 00:23:38.236 [2024-12-10 22:54:45.853852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9f60 is same with the state(6) to be set 00:23:38.236 [2024-12-10 22:54:45.853864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9f60 is same with the state(6) to be set 00:23:38.236 [2024-12-10 22:54:45.853876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9f60 is same with the state(6) to be set 00:23:38.236 [2024-12-10 22:54:45.853887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9f60 is same with the state(6) to be set 00:23:38.236 [2024-12-10 22:54:45.853899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9f60 is same with the state(6) to be set 00:23:38.236 22:54:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 137326 00:23:44.814 { 00:23:44.814 "results": [ 00:23:44.814 { 00:23:44.814 "job": "NVMe0n1", 00:23:44.814 "core_mask": "0x1", 00:23:44.814 "workload": "verify", 00:23:44.814 "status": "finished", 00:23:44.814 "verify_range": { 00:23:44.814 "start": 0, 00:23:44.814 "length": 16384 00:23:44.814 }, 00:23:44.814 "queue_depth": 128, 00:23:44.814 "io_size": 4096, 00:23:44.814 "runtime": 15.008945, 00:23:44.814 "iops": 8552.299978446186, 00:23:44.814 "mibps": 33.407421790805415, 00:23:44.814 "io_failed": 8357, 00:23:44.814 "io_timeout": 0, 00:23:44.814 "avg_latency_us": 14022.901955113877, 00:23:44.814 "min_latency_us": 543.0992592592593, 00:23:44.814 "max_latency_us": 18350.08 00:23:44.814 } 00:23:44.814 ], 00:23:44.814 "core_count": 1 00:23:44.814 } 00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 137192 00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 137192 ']' 
00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 137192 00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 137192 00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 137192' 00:23:44.814 killing process with pid 137192 00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 137192 00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 137192 00:23:44.814 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:44.814 [2024-12-10 22:54:35.132194] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:23:44.814 [2024-12-10 22:54:35.132284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137192 ] 00:23:44.814 [2024-12-10 22:54:35.201590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.814 [2024-12-10 22:54:35.259468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.814 Running I/O for 15 seconds... 
00:23:44.814 8585.00 IOPS, 33.54 MiB/s [2024-12-10T21:54:52.547Z] [2024-12-10 22:54:37.473796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.815 [2024-12-10 22:54:37.473837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.473863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.473879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.473896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.815 [2024-12-10 22:54:37.473911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.473926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.815 [2024-12-10 22:54:37.473940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.473955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.815 [2024-12-10 22:54:37.473969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.473985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:44.815 [2024-12-10 22:54:37.473999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.815 [2024-12-10 22:54:37.474028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.815 [2024-12-10 22:54:37.474057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.815 [2024-12-10 22:54:37.474086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 
22:54:37.474523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:60 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:44.815 [2024-12-10 22:54:37.474886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.474972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.815 [2024-12-10 22:54:37.474985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.815 [2024-12-10 22:54:37.475000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 
22:54:37.475358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475507] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.816 [2024-12-10 22:54:37.475557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.816 [2024-12-10 22:54:37.475588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.816 [2024-12-10 22:54:37.475616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.816 [2024-12-10 22:54:37.475644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.816 [2024-12-10 22:54:37.475675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:44.816 [2024-12-10 22:54:37.475705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.816 [2024-12-10 22:54:37.475733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.816 [2024-12-10 22:54:37.475761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.816 [2024-12-10 22:54:37.475790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.475979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.475992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.476007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.476021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.476036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.476049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.476067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.476081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.476096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.816 [2024-12-10 22:54:37.476109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.816 [2024-12-10 22:54:37.476123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 
[2024-12-10 22:54:37.476220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476383] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.817 [2024-12-10 22:54:37.476777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.476806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.476839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.476890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.476919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.476947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.476975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.476990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 
[2024-12-10 22:54:37.477101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.817 [2024-12-10 22:54:37.477351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.817 [2024-12-10 22:54:37.477365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:37.477379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:37.477406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:37.477435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:37.477463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:37.477491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:37.477519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:37.477573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:37.477606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 
[2024-12-10 22:54:37.477621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:37.477635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:37.477664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.818 [2024-12-10 22:54:37.477712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.818 [2024-12-10 22:54:37.477723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78808 len:8 PRP1 0x0 PRP2 0x0 00:23:44.818 [2024-12-10 22:54:37.477742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477814] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:44.818 [2024-12-10 22:54:37.477868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.818 [2024-12-10 22:54:37.477885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.818 [2024-12-10 
22:54:37.477928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.818 [2024-12-10 22:54:37.477955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.477968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.818 [2024-12-10 22:54:37.477980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:37.478004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:44.818 [2024-12-10 22:54:37.478068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83230 (9): Bad file descriptor 00:23:44.818 [2024-12-10 22:54:37.481625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:44.818 [2024-12-10 22:54:37.547268] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:23:44.818 8285.50 IOPS, 32.37 MiB/s [2024-12-10T21:54:52.550Z] 8394.67 IOPS, 32.79 MiB/s [2024-12-10T21:54:52.550Z] 8460.00 IOPS, 33.05 MiB/s [2024-12-10T21:54:52.550Z] [2024-12-10 22:54:41.268991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269246] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.818 [2024-12-10 22:54:41.269478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:41.269510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:41.269561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:41.269591] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:41.269620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:41.269648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:41.269675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:41.269703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.818 [2024-12-10 22:54:41.269730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.818 [2024-12-10 22:54:41.269745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.269758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.269772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.269785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.269800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.269813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.269827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.269840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.269855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.269882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.269900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.269914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:44.819 [2024-12-10 22:54:41.269928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.269941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.269955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.819 [2024-12-10 22:54:41.269968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.269982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.269995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 
[2024-12-10 22:54:41.270390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.819 [2024-12-10 22:54:41.270678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.819 [2024-12-10 22:54:41.270693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.270706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.270721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.270734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.270749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.270762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.270776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.270789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.270803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.270817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.270847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.270860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.270874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.270887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 
[2024-12-10 22:54:41.270901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.270914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.270928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.270940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.270954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.270967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.270985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.270999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.820 [2024-12-10 22:54:41.271133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.820 [2024-12-10 22:54:41.271160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.820 [2024-12-10 22:54:41.271187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.820 [2024-12-10 22:54:41.271213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.820 [2024-12-10 22:54:41.271255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.820 [2024-12-10 22:54:41.271283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.820 [2024-12-10 22:54:41.271310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.820 [2024-12-10 22:54:41.271341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.820 [2024-12-10 22:54:41.271369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 
22:54:41.271383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.820 [2024-12-10 22:54:41.271396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271535] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.820 [2024-12-10 22:54:41.271822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.820 [2024-12-10 22:54:41.271837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.271850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.271865] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.271878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.271893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.271913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.271928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.271941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.271956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.271969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.271983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.271996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:44.821 [2024-12-10 22:54:41.272192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:44.821 [2024-12-10 22:54:41.272711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.821 [2024-12-10 22:54:41.272768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa63f0 is same with the state(6) to be set 00:23:44.821 [2024-12-10 22:54:41.272799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.821 [2024-12-10 22:54:41.272825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.821 [2024-12-10 22:54:41.272837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95040 len:8 PRP1 0x0 PRP2 0x0 00:23:44.821 [2024-12-10 22:54:41.272865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.272943] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:44.821 [2024-12-10 22:54:41.272995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.821 
[2024-12-10 22:54:41.273015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.273030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.821 [2024-12-10 22:54:41.273043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.273056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.821 [2024-12-10 22:54:41.273069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.273083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.821 [2024-12-10 22:54:41.273096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.821 [2024-12-10 22:54:41.273108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:44.821 [2024-12-10 22:54:41.273166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83230 (9): Bad file descriptor 00:23:44.821 [2024-12-10 22:54:41.277013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:44.821 [2024-12-10 22:54:41.341518] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:23:44.821 8357.40 IOPS, 32.65 MiB/s [2024-12-10T21:54:52.553Z] 8413.33 IOPS, 32.86 MiB/s [2024-12-10T21:54:52.553Z] 8467.43 IOPS, 33.08 MiB/s [2024-12-10T21:54:52.554Z] 8509.12 IOPS, 33.24 MiB/s [2024-12-10T21:54:52.554Z] 8522.56 IOPS, 33.29 MiB/s [2024-12-10T21:54:52.554Z] [2024-12-10 22:54:45.855132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.855182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.855227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.855258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.855289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.855319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.855355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.855385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.855414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855502] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 
nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 
[2024-12-10 22:54:45.855851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.855981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.855997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.856010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.856026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.856039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.856055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.856069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.856088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.822 [2024-12-10 22:54:45.856102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.856117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.856131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.856146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.856160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.856175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.856188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.856203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.856217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.856232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.856245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.856261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.856275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.856290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.822 [2024-12-10 22:54:45.856304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.822 [2024-12-10 22:54:45.856319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 
[2024-12-10 22:54:45.856348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 
[2024-12-10 22:54:45.856855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.856983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.856999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.857027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.857055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.857084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.857113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.857141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.857170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:37568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.857202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.857232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.857260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.857289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.823 [2024-12-10 22:54:45.857317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.823 [2024-12-10 22:54:45.857331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 
[2024-12-10 22:54:45.857346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.824 [2024-12-10 22:54:45.857359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.824 [2024-12-10 22:54:45.857388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.824 [2024-12-10 22:54:45.857417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.824 [2024-12-10 22:54:45.857445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.824 [2024-12-10 22:54:45.857473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.824 [2024-12-10 22:54:45.857501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.824 [2024-12-10 22:54:45.857530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.824 [2024-12-10 22:54:45.857570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 
22:54:45.857848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.857980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.857994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858008] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.824 [2024-12-10 22:54:45.858512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.824 [2024-12-10 22:54:45.858550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.824 [2024-12-10 22:54:45.858569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38120 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.858582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.858601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.858613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.858625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38128 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.858637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.858649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.858666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.858679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38136 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.858692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.858705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.858716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.858730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38144 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.858744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.858757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.858768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.858779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38152 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.858791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.858804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.858815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.858825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38160 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.858838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.858850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.858860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.858871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38168 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.858884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.858896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.858907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.858918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38176 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.858930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.858943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.858953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.858964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38184 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.858976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.858988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.858999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.859010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38192 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.859022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.859035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.859051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.859062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38200 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.859075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.859087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.859101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.859112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38208 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.859125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.859138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.859149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.859159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38216 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.859172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.859185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.859195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.859206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38224 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.859219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.859231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.825 [2024-12-10 22:54:45.859242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.825 [2024-12-10 22:54:45.859253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38232 len:8 PRP1 0x0 PRP2 0x0 00:23:44.825 [2024-12-10 22:54:45.859266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.859335] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:44.825 [2024-12-10 22:54:45.859373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.825 [2024-12-10 22:54:45.859391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.859406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.825 [2024-12-10 22:54:45.859420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.859433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.825 [2024-12-10 22:54:45.859446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.859459] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.825 [2024-12-10 22:54:45.859472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.825 [2024-12-10 22:54:45.859485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:44.825 [2024-12-10 22:54:45.859538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83230 (9): Bad file descriptor 00:23:44.825 [2024-12-10 22:54:45.863086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:44.825 [2024-12-10 22:54:45.931968] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:44.825 8473.70 IOPS, 33.10 MiB/s [2024-12-10T21:54:52.557Z] 8499.64 IOPS, 33.20 MiB/s [2024-12-10T21:54:52.557Z] 8520.33 IOPS, 33.28 MiB/s [2024-12-10T21:54:52.557Z] 8530.08 IOPS, 33.32 MiB/s [2024-12-10T21:54:52.557Z] 8539.14 IOPS, 33.36 MiB/s 00:23:44.825 Latency(us) 00:23:44.825 [2024-12-10T21:54:52.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.825 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:44.825 Verification LBA range: start 0x0 length 0x4000 00:23:44.825 NVMe0n1 : 15.01 8552.30 33.41 556.80 0.00 14022.90 543.10 18350.08 00:23:44.825 [2024-12-10T21:54:52.557Z] =================================================================================================================== 00:23:44.825 [2024-12-10T21:54:52.557Z] Total : 8552.30 33.41 556.80 0.00 14022.90 543.10 18350.08 00:23:44.825 Received shutdown signal, test time was about 15.000000 seconds 00:23:44.825 00:23:44.825 Latency(us) 00:23:44.825 [2024-12-10T21:54:52.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:44.825 [2024-12-10T21:54:52.557Z] =================================================================================================================== 00:23:44.825 [2024-12-10T21:54:52.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.825 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:44.825 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:44.825 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:44.825 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=139051 00:23:44.825 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:44.825 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 139051 /var/tmp/bdevperf.sock 00:23:44.825 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 139051 ']' 00:23:44.825 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.826 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.826 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
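The `bdevperf.py … perform_tests` step driven below reports its per-job results as a JSON document (the `{ "results": [ … ] }` payload printed further down in this log). A minimal sketch of pulling the headline numbers out of one of those payloads; the field names and values are copied from the JSON in this log, not from any documented SPDK schema, so treat the shape as an assumption:

```python
import json

# Result payload as emitted by bdevperf.py's perform_tests in this run;
# field names taken verbatim from the log output, not a documented schema.
raw = json.loads("""
{
  "results": [
    {
      "job": "NVMe0n1",
      "core_mask": "0x1",
      "workload": "verify",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 1.008013,
      "iops": 8572.310079334295,
      "mibps": 33.48558624739959,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 14866.608326368263,
      "min_latency_us": 1808.3081481481481,
      "max_latency_us": 13786.832592592593
    }
  ],
  "core_count": 1
}
""")

for job in raw["results"]:
    # A job passed the verify workload if it finished with no failed
    # or timed-out I/O.
    ok = (job["status"] == "finished"
          and job["io_failed"] == 0
          and job["io_timeout"] == 0)
    print(f'{job["job"]}: {job["iops"]:.0f} IOPS, '
          f'{job["mibps"]:.2f} MiB/s, ok={ok}')
```

In the actual test, `failover.sh` does not parse this JSON; it greps the driver log for 'Resetting controller successful' and compares the count against the number of injected failovers, as the `grep -c` / `(( count != 3 ))` lines above show.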
00:23:44.826 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.826 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:44.826 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.826 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:44.826 22:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:44.826 [2024-12-10 22:54:52.183513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:44.826 22:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:44.826 [2024-12-10 22:54:52.448203] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:44.826 22:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:45.392 NVMe0n1 00:23:45.392 22:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:45.650 00:23:45.650 22:54:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:45.908 00:23:45.908 22:54:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:45.908 22:54:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:46.166 22:54:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:46.734 22:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:50.020 22:54:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:50.020 22:54:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:50.020 22:54:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=139836 00:23:50.020 22:54:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:50.020 22:54:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 139836 00:23:50.956 { 00:23:50.956 "results": [ 00:23:50.956 { 00:23:50.956 "job": "NVMe0n1", 00:23:50.956 "core_mask": "0x1", 00:23:50.956 "workload": "verify", 00:23:50.956 "status": "finished", 00:23:50.956 "verify_range": { 00:23:50.956 "start": 0, 00:23:50.956 "length": 16384 00:23:50.956 }, 00:23:50.956 "queue_depth": 128, 00:23:50.956 "io_size": 4096, 00:23:50.956 "runtime": 1.008013, 00:23:50.956 "iops": 8572.310079334295, 00:23:50.956 "mibps": 33.48558624739959, 00:23:50.956 "io_failed": 0, 00:23:50.956 "io_timeout": 0, 00:23:50.956 "avg_latency_us": 
14866.608326368263, 00:23:50.956 "min_latency_us": 1808.3081481481481, 00:23:50.956 "max_latency_us": 13786.832592592593 00:23:50.956 } 00:23:50.956 ], 00:23:50.956 "core_count": 1 00:23:50.956 } 00:23:50.956 22:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:50.956 [2024-12-10 22:54:51.679645] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:23:50.956 [2024-12-10 22:54:51.679736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139051 ] 00:23:50.956 [2024-12-10 22:54:51.751463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.956 [2024-12-10 22:54:51.810703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.956 [2024-12-10 22:54:54.156636] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:50.956 [2024-12-10 22:54:54.156732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.956 [2024-12-10 22:54:54.156758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.956 [2024-12-10 22:54:54.156775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.956 [2024-12-10 22:54:54.156789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.956 [2024-12-10 22:54:54.156803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:50.956 [2024-12-10 22:54:54.156817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.956 [2024-12-10 22:54:54.156831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.956 [2024-12-10 22:54:54.156844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.956 [2024-12-10 22:54:54.156858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:50.956 [2024-12-10 22:54:54.156904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:50.956 [2024-12-10 22:54:54.156941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec6230 (9): Bad file descriptor 00:23:50.956 [2024-12-10 22:54:54.248774] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:50.956 Running I/O for 1 seconds... 
00:23:50.956 8513.00 IOPS, 33.25 MiB/s 00:23:50.956 Latency(us) 00:23:50.956 [2024-12-10T21:54:58.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.956 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:50.956 Verification LBA range: start 0x0 length 0x4000 00:23:50.956 NVMe0n1 : 1.01 8572.31 33.49 0.00 0.00 14866.61 1808.31 13786.83 00:23:50.956 [2024-12-10T21:54:58.688Z] =================================================================================================================== 00:23:50.956 [2024-12-10T21:54:58.688Z] Total : 8572.31 33.49 0.00 0.00 14866.61 1808.31 13786.83 00:23:50.956 22:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:50.956 22:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:51.214 22:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:51.472 22:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:51.472 22:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:51.730 22:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:51.989 22:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:55.273 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:55.273 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:55.532 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 139051 00:23:55.532 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 139051 ']' 00:23:55.532 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 139051 00:23:55.532 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:55.532 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.532 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 139051 00:23:55.532 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:55.532 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:55.532 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 139051' 00:23:55.532 killing process with pid 139051 00:23:55.532 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 139051 00:23:55.532 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 139051 00:23:55.791 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:55.791 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:56.049 rmmod nvme_tcp 00:23:56.049 rmmod nvme_fabrics 00:23:56.049 rmmod nvme_keyring 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 136902 ']' 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 136902 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 136902 ']' 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 136902 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 136902 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 136902' 00:23:56.049 killing process with pid 136902 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 136902 00:23:56.049 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 136902 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.308 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.213 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:58.213 00:23:58.213 real 0m35.608s 00:23:58.213 user 2m6.601s 00:23:58.213 sys 
0m5.570s 00:23:58.213 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.213 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:58.213 ************************************ 00:23:58.213 END TEST nvmf_failover 00:23:58.213 ************************************ 00:23:58.472 22:55:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:58.472 22:55:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:58.472 22:55:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.472 22:55:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.472 ************************************ 00:23:58.472 START TEST nvmf_host_discovery 00:23:58.472 ************************************ 00:23:58.472 22:55:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:58.472 * Looking for test storage... 
00:23:58.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.472 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:58.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.473 --rc genhtml_branch_coverage=1 00:23:58.473 --rc genhtml_function_coverage=1 00:23:58.473 --rc 
genhtml_legend=1 00:23:58.473 --rc geninfo_all_blocks=1 00:23:58.473 --rc geninfo_unexecuted_blocks=1 00:23:58.473 00:23:58.473 ' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:58.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.473 --rc genhtml_branch_coverage=1 00:23:58.473 --rc genhtml_function_coverage=1 00:23:58.473 --rc genhtml_legend=1 00:23:58.473 --rc geninfo_all_blocks=1 00:23:58.473 --rc geninfo_unexecuted_blocks=1 00:23:58.473 00:23:58.473 ' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:58.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.473 --rc genhtml_branch_coverage=1 00:23:58.473 --rc genhtml_function_coverage=1 00:23:58.473 --rc genhtml_legend=1 00:23:58.473 --rc geninfo_all_blocks=1 00:23:58.473 --rc geninfo_unexecuted_blocks=1 00:23:58.473 00:23:58.473 ' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:58.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.473 --rc genhtml_branch_coverage=1 00:23:58.473 --rc genhtml_function_coverage=1 00:23:58.473 --rc genhtml_legend=1 00:23:58.473 --rc geninfo_all_blocks=1 00:23:58.473 --rc geninfo_unexecuted_blocks=1 00:23:58.473 00:23:58.473 ' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.473 22:55:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.473 22:55:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.473 22:55:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:58.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.473 22:55:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:01.007 
22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.007 22:55:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:01.007 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:01.007 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:01.007 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:01.007 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:01.007 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:01.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:24:01.008 00:24:01.008 --- 10.0.0.2 ping statistics --- 00:24:01.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.008 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:01.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:24:01.008 00:24:01.008 --- 10.0.0.1 ping statistics --- 00:24:01.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.008 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.008 
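For readers following the trace, the `nvmf_tcp_init` steps above (nvmf/common.sh@250-291) boil down to: move one port of the NIC into a network namespace, address both ends on 10.0.0.0/24, open TCP port 4420, and ping-verify both directions. The sketch below reproduces that sequence; the interface and namespace names are taken from the log, and a `DRY_RUN` wrapper is added here (not part of the harness) so the commands can be previewed without root or real NICs.

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based topology the harness builds above.
# With DRY_RUN=1 (the default here) each command is echoed instead of run,
# since the real commands need root privileges and physical interfaces.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_topology() {
    local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    run ip netns add "$ns"
    # Target port lives inside the namespace; initiator port stays in the host.
    run ip link set "$target_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # Open the NVMe/TCP port on the initiator side, as the ipts wrapper does.
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

setup_tcp_topology cvl_0_0 cvl_0_1
```

After this, `ping -c 1 10.0.0.2` from the host and `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` from the namespace confirm the path, which is exactly what the two ping blocks in the trace show.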
22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=142453 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 142453 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 142453 ']' 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
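The `waitforlisten 142453` call above blocks until the freshly launched `nvmf_tgt` is alive and its RPC socket is accepting commands. A minimal re-implementation of that polling pattern, assuming the same pid/socket semantics (the retry count and interval here are illustrative, not the harness's exact values):

```shell
# Poll until the process with pid $1 exists and its UNIX-domain RPC socket
# $2 has been created; give up after $3 retries (default 100, ~10s).
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    while (( max_retries-- )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$rpc_addr" ] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                      # timed out
}
```

The harness uses the same idea for both targets in this test: the namespaced `nvmf_tgt` on `/var/tmp/spdk.sock` and, a few lines below, the host-side target on `/tmp/host.sock`.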
00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.008 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.008 [2024-12-10 22:55:08.506723] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:24:01.008 [2024-12-10 22:55:08.506811] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.008 [2024-12-10 22:55:08.577149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.008 [2024-12-10 22:55:08.632807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.008 [2024-12-10 22:55:08.632867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.008 [2024-12-10 22:55:08.632880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.008 [2024-12-10 22:55:08.632905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.008 [2024-12-10 22:55:08.632915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:01.008 [2024-12-10 22:55:08.633483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 [2024-12-10 22:55:08.813698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 [2024-12-10 22:55:08.821950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:01.266 22:55:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 null0 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 null1 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=142591 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 142591 /tmp/host.sock 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 142591 ']' 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:01.266 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.266 22:55:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 [2024-12-10 22:55:08.896140] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:24:01.266 [2024-12-10 22:55:08.896223] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142591 ] 00:24:01.266 [2024-12-10 22:55:08.962452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.525 [2024-12-10 22:55:09.019413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:01.525 22:55:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:01.525 22:55:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.525 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:01.784 22:55:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.784 [2024-12-10 22:55:09.419455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.784 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:02.042 22:55:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:02.609 [2024-12-10 22:55:10.231659] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:02.609 [2024-12-10 22:55:10.231702] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:02.609 [2024-12-10 22:55:10.231727] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:02.609 [2024-12-10 22:55:10.319026] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:02.866 [2024-12-10 22:55:10.379747] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:02.866 [2024-12-10 22:55:10.380749] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x131eb60:1 started. 
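The repeated `eval '[[ "$(get_subsystem_names)" == "nvme0" ]]'` traces above are the harness polling the host target until the discovery service has attached the `nvme0` controller. A condensed sketch of that pattern, assuming the SPDK-repo-relative `scripts/rpc.py` path and the `/tmp/host.sock` socket from the log (the retry cap is made a parameter here for illustration):

```shell
# Names of controllers the host target currently knows about, normalized
# into one sorted space-separated line (mirrors the jq|sort|xargs trace).
get_subsystem_names() {
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}

# Re-evaluate an arbitrary condition string once per second until it holds.
waitforcondition() {
    local cond=$1 max=${2:-10}
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}

# Usage, as in host/discovery.sh@105:
#   waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
```

The first evaluation in the trace sees an empty name list and sleeps; after the discovery poller attaches `nvme0` (the `discovery_attach_controller_done` notice), the next evaluation succeeds and the test moves on to checking the bdev list for `nvme0n1`.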
00:24:02.866 [2024-12-10 22:55:10.382563] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:02.866 [2024-12-10 22:55:10.382600] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:02.866 [2024-12-10 22:55:10.389419] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x131eb60 was disconnected and freed. delete nvme_qpair. 00:24:02.866 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:02.866 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:02.866 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:02.866 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:02.866 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:02.866 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.866 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:02.866 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.866 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:02.866 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:03.124 22:55:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:03.124 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:03.125 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.125 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:03.125 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.125 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:03.125 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.125 22:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:03.383 [2024-12-10 22:55:11.013742] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x131f0a0:1 started. 
00:24:03.383 [2024-12-10 22:55:11.021111] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x131f0a0 was disconnected and freed. delete nvme_qpair. 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.383 [2024-12-10 22:55:11.084440] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:03.383 [2024-12-10 22:55:11.085458] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:03.383 [2024-12-10 22:55:11.085491] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:03.383 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.641 22:55:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.641 22:55:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.641 [2024-12-10 22:55:11.212353] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:03.641 22:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:03.641 [2024-12-10 22:55:11.276253] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:03.641 [2024-12-10 22:55:11.276304] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:03.641 [2024-12-10 22:55:11.276319] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:24:03.641 [2024-12-10 22:55:11.276327] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.580 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.878 [2024-12-10 22:55:12.308333] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:04.878 [2024-12-10 22:55:12.308382] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:04.878 [2024-12-10 22:55:12.308516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.878 [2024-12-10 22:55:12.308573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.878 [2024-12-10 22:55:12.308594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:04.878 [2024-12-10 22:55:12.308609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.878 [2024-12-10 22:55:12.308630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.878 [2024-12-10 22:55:12.308653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.878 [2024-12-10 22:55:12.308668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.878 [2024-12-10 22:55:12.308681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.878 [2024-12-10 22:55:12.308695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ef0d0 is same with the state(6) to be set 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:04.878 22:55:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:04.878 [2024-12-10 22:55:12.318514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ef0d0 (9): Bad file descriptor 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.878 [2024-12-10 22:55:12.328574] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:04.878 [2024-12-10 22:55:12.328598] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:04.878 [2024-12-10 22:55:12.328614] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:04.878 [2024-12-10 22:55:12.328640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:04.878 [2024-12-10 22:55:12.328676] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:04.878 [2024-12-10 22:55:12.328886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.878 [2024-12-10 22:55:12.328924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ef0d0 with addr=10.0.0.2, port=4420 00:24:04.878 [2024-12-10 22:55:12.328941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ef0d0 is same with the state(6) to be set 00:24:04.878 [2024-12-10 22:55:12.328966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ef0d0 (9): Bad file descriptor 00:24:04.878 [2024-12-10 22:55:12.328989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:04.878 [2024-12-10 22:55:12.329003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:04.878 [2024-12-10 22:55:12.329021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:04.878 [2024-12-10 22:55:12.329034] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:04.878 [2024-12-10 22:55:12.329050] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:04.878 [2024-12-10 22:55:12.329058] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:04.878 [2024-12-10 22:55:12.338710] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:04.878 [2024-12-10 22:55:12.338731] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:04.878 [2024-12-10 22:55:12.338740] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:04.878 [2024-12-10 22:55:12.338747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:04.878 [2024-12-10 22:55:12.338771] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:04.878 [2024-12-10 22:55:12.338987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.878 [2024-12-10 22:55:12.339015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ef0d0 with addr=10.0.0.2, port=4420 00:24:04.878 [2024-12-10 22:55:12.339032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ef0d0 is same with the state(6) to be set 00:24:04.878 [2024-12-10 22:55:12.339054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ef0d0 (9): Bad file descriptor 00:24:04.878 [2024-12-10 22:55:12.339087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:04.878 [2024-12-10 22:55:12.339105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:04.878 [2024-12-10 22:55:12.339118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:04.878 [2024-12-10 22:55:12.339129] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:04.878 [2024-12-10 22:55:12.339138] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:04.878 [2024-12-10 22:55:12.339146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:04.878 [2024-12-10 22:55:12.348807] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:04.878 [2024-12-10 22:55:12.348845] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:04.878 [2024-12-10 22:55:12.348855] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:04.878 [2024-12-10 22:55:12.348864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:04.878 [2024-12-10 22:55:12.348904] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:04.878 [2024-12-10 22:55:12.349100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.878 [2024-12-10 22:55:12.349128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ef0d0 with addr=10.0.0.2, port=4420 00:24:04.878 [2024-12-10 22:55:12.349144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ef0d0 is same with the state(6) to be set 00:24:04.878 [2024-12-10 22:55:12.349166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ef0d0 (9): Bad file descriptor 00:24:04.878 [2024-12-10 22:55:12.349186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:04.878 [2024-12-10 22:55:12.349200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:04.878 [2024-12-10 22:55:12.349213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:04.878 [2024-12-10 22:55:12.349230] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:04.878 [2024-12-10 22:55:12.349240] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:04.878 [2024-12-10 22:55:12.349247] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.878 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:24:04.878 [2024-12-10 22:55:12.358937] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:04.879 [2024-12-10 22:55:12.358959] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:04.879 [2024-12-10 22:55:12.358968] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:04.879 [2024-12-10 22:55:12.358975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:04.879 [2024-12-10 22:55:12.359012] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:04.879 [2024-12-10 22:55:12.359241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.879 [2024-12-10 22:55:12.359269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ef0d0 with addr=10.0.0.2, port=4420 00:24:04.879 [2024-12-10 22:55:12.359286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ef0d0 is same with the state(6) to be set 00:24:04.879 [2024-12-10 22:55:12.359308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ef0d0 (9): Bad file descriptor 00:24:04.879 [2024-12-10 22:55:12.360218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:04.879 [2024-12-10 22:55:12.360240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:04.879 [2024-12-10 22:55:12.360253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:04.879 [2024-12-10 22:55:12.360264] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:04.879 [2024-12-10 22:55:12.360287] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:04.879 [2024-12-10 22:55:12.360295] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:04.879 [2024-12-10 22:55:12.369045] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:04.879 [2024-12-10 22:55:12.369066] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:04.879 [2024-12-10 22:55:12.369074] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:04.879 [2024-12-10 22:55:12.369081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:04.879 [2024-12-10 22:55:12.369119] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:04.879 [2024-12-10 22:55:12.369231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.879 [2024-12-10 22:55:12.369273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ef0d0 with addr=10.0.0.2, port=4420 00:24:04.879 [2024-12-10 22:55:12.369290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ef0d0 is same with the state(6) to be set 00:24:04.879 [2024-12-10 22:55:12.369312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ef0d0 (9): Bad file descriptor 00:24:04.879 [2024-12-10 22:55:12.369346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:04.879 [2024-12-10 22:55:12.369364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:04.879 [2024-12-10 22:55:12.369378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:04.879 [2024-12-10 22:55:12.369390] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:04.879 [2024-12-10 22:55:12.369398] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:04.879 [2024-12-10 22:55:12.369406] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:04.879 [2024-12-10 22:55:12.379153] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:04.879 [2024-12-10 22:55:12.379173] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:04.879 [2024-12-10 22:55:12.379181] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:04.879 [2024-12-10 22:55:12.379189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:04.879 [2024-12-10 22:55:12.379226] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:04.879 [2024-12-10 22:55:12.379366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.879 [2024-12-10 22:55:12.379393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ef0d0 with addr=10.0.0.2, port=4420 00:24:04.879 [2024-12-10 22:55:12.379409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ef0d0 is same with the state(6) to be set 00:24:04.879 [2024-12-10 22:55:12.379430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ef0d0 (9): Bad file descriptor 00:24:04.879 [2024-12-10 22:55:12.379484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:04.879 [2024-12-10 22:55:12.379504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:04.879 [2024-12-10 22:55:12.379517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:04.879 [2024-12-10 22:55:12.379529] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:04.879 [2024-12-10 22:55:12.379538] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:04.879 [2024-12-10 22:55:12.379555] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.879 [2024-12-10 22:55:12.389259] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:04.879 [2024-12-10 22:55:12.389277] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:04.879 [2024-12-10 22:55:12.389285] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:04.879 [2024-12-10 22:55:12.389292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:04.879 [2024-12-10 22:55:12.389329] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:04.879 [2024-12-10 22:55:12.389563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.879 [2024-12-10 22:55:12.389591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ef0d0 with addr=10.0.0.2, port=4420 00:24:04.879 [2024-12-10 22:55:12.389607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ef0d0 is same with the state(6) to be set 00:24:04.879 [2024-12-10 22:55:12.389630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ef0d0 (9): Bad file descriptor 00:24:04.879 [2024-12-10 22:55:12.389670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:04.879 [2024-12-10 22:55:12.389687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:04.879 [2024-12-10 22:55:12.389700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:04.879 [2024-12-10 22:55:12.389712] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:04.879 [2024-12-10 22:55:12.389721] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:04.879 [2024-12-10 22:55:12.389729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:04.879 [2024-12-10 22:55:12.394683] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:04.879 [2024-12-10 22:55:12.394713] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:04.879 22:55:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:04.879 22:55:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.879 22:55:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.879 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.137 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:05.137 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:05.137 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:05.137 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:05.138 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:05.138 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.138 22:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.070 [2024-12-10 22:55:13.630071] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:06.070 [2024-12-10 
22:55:13.630105] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:06.070 [2024-12-10 22:55:13.630128] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:06.070 [2024-12-10 22:55:13.718380] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:06.328 [2024-12-10 22:55:13.824204] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:06.328 [2024-12-10 22:55:13.824961] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1300510:1 started. 00:24:06.328 [2024-12-10 22:55:13.827076] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:06.328 [2024-12-10 22:55:13.827120] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:06.328 [2024-12-10 22:55:13.828712] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1300510 was disconnected and freed. delete nvme_qpair. 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.328 request: 00:24:06.328 { 00:24:06.328 "name": "nvme", 00:24:06.328 "trtype": "tcp", 00:24:06.328 "traddr": "10.0.0.2", 00:24:06.328 "adrfam": "ipv4", 00:24:06.328 "trsvcid": "8009", 00:24:06.328 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:06.328 "wait_for_attach": true, 00:24:06.328 "method": "bdev_nvme_start_discovery", 00:24:06.328 "req_id": 1 00:24:06.328 } 00:24:06.328 Got JSON-RPC error response 00:24:06.328 response: 00:24:06.328 { 00:24:06.328 "code": -17, 00:24:06.328 "message": "File exists" 00:24:06.328 } 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:06.328 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.329 request: 00:24:06.329 { 00:24:06.329 "name": "nvme_second", 00:24:06.329 "trtype": "tcp", 00:24:06.329 "traddr": "10.0.0.2", 00:24:06.329 "adrfam": "ipv4", 00:24:06.329 "trsvcid": "8009", 00:24:06.329 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:06.329 "wait_for_attach": true, 00:24:06.329 "method": "bdev_nvme_start_discovery", 00:24:06.329 "req_id": 1 00:24:06.329 } 00:24:06.329 Got JSON-RPC error response 00:24:06.329 response: 00:24:06.329 
{ 00:24:06.329 "code": -17, 00:24:06.329 "message": "File exists" 00:24:06.329 } 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.329 
22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.329 22:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:06.329 22:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.329 22:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:06.329 22:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:06.329 22:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:06.329 22:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:06.329 22:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:06.329 22:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.329 22:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:06.329 22:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.329 22:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:06.329 22:55:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.329 22:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.699 [2024-12-10 22:55:15.038508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.699 [2024-12-10 22:55:15.038571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132ab50 with addr=10.0.0.2, port=8010 00:24:07.699 [2024-12-10 22:55:15.038595] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:07.699 [2024-12-10 22:55:15.038610] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:07.699 [2024-12-10 22:55:15.038622] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:08.631 [2024-12-10 22:55:16.040854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.631 [2024-12-10 22:55:16.040888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132ab50 with addr=10.0.0.2, port=8010 00:24:08.631 [2024-12-10 22:55:16.040909] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:08.631 [2024-12-10 22:55:16.040922] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:08.631 [2024-12-10 22:55:16.040933] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:09.564 [2024-12-10 22:55:17.043182] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:09.564 request: 00:24:09.564 { 00:24:09.564 "name": "nvme_second", 00:24:09.564 "trtype": "tcp", 00:24:09.564 "traddr": "10.0.0.2", 00:24:09.564 "adrfam": "ipv4", 00:24:09.564 "trsvcid": "8010", 00:24:09.564 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:09.564 "wait_for_attach": false, 00:24:09.564 "attach_timeout_ms": 3000, 
00:24:09.564 "method": "bdev_nvme_start_discovery", 00:24:09.564 "req_id": 1 00:24:09.564 } 00:24:09.564 Got JSON-RPC error response 00:24:09.564 response: 00:24:09.564 { 00:24:09.564 "code": -110, 00:24:09.564 "message": "Connection timed out" 00:24:09.564 } 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@161 -- # kill 142591 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.564 rmmod nvme_tcp 00:24:09.564 rmmod nvme_fabrics 00:24:09.564 rmmod nvme_keyring 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 142453 ']' 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 142453 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 142453 ']' 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 142453 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 142453 00:24:09.564 22:55:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 142453' 00:24:09.564 killing process with pid 142453 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 142453 00:24:09.564 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 142453 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.824 22:55:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.362 22:55:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:12.362 00:24:12.362 real 0m13.497s 00:24:12.362 user 0m19.404s 00:24:12.362 sys 0m2.892s 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.362 ************************************ 00:24:12.362 END TEST nvmf_host_discovery 00:24:12.362 ************************************ 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.362 ************************************ 00:24:12.362 START TEST nvmf_host_multipath_status 00:24:12.362 ************************************ 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:12.362 * Looking for test storage... 
00:24:12.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:12.362 22:55:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:12.362 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:12.363 22:55:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:12.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.363 --rc genhtml_branch_coverage=1 00:24:12.363 --rc genhtml_function_coverage=1 00:24:12.363 --rc genhtml_legend=1 00:24:12.363 --rc geninfo_all_blocks=1 00:24:12.363 --rc geninfo_unexecuted_blocks=1 00:24:12.363 00:24:12.363 ' 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:12.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.363 --rc genhtml_branch_coverage=1 00:24:12.363 --rc genhtml_function_coverage=1 00:24:12.363 --rc genhtml_legend=1 00:24:12.363 --rc geninfo_all_blocks=1 00:24:12.363 --rc geninfo_unexecuted_blocks=1 00:24:12.363 00:24:12.363 ' 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:12.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.363 --rc genhtml_branch_coverage=1 00:24:12.363 --rc genhtml_function_coverage=1 00:24:12.363 --rc genhtml_legend=1 00:24:12.363 --rc geninfo_all_blocks=1 00:24:12.363 --rc geninfo_unexecuted_blocks=1 00:24:12.363 00:24:12.363 ' 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:12.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.363 --rc genhtml_branch_coverage=1 00:24:12.363 --rc genhtml_function_coverage=1 00:24:12.363 --rc genhtml_legend=1 00:24:12.363 --rc geninfo_all_blocks=1 00:24:12.363 --rc geninfo_unexecuted_blocks=1 00:24:12.363 00:24:12.363 ' 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:12.363 
22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:12.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:12.363 22:55:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:12.363 22:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:14.269 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:14.269 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.269 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:14.269 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.270 22:55:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:14.270 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.270 22:55:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.270 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.528 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.528 22:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.528 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:24:14.528 00:24:14.528 --- 10.0.0.2 ping statistics --- 00:24:14.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.528 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:24:14.528 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:24:14.528 00:24:14.528 --- 10.0.0.1 ping statistics --- 00:24:14.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.528 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:14.528 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.528 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:14.528 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.528 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.528 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.528 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.528 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.528 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.528 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=145644 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 145644 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 145644 ']' 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.529 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:14.529 [2024-12-10 22:55:22.088235] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:24:14.529 [2024-12-10 22:55:22.088320] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.529 [2024-12-10 22:55:22.160384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:14.529 [2024-12-10 22:55:22.216785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.529 [2024-12-10 22:55:22.216863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
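At this point the log launches `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and then calls `waitforlisten 145644`, blocking until the target accepts connections on /var/tmp/spdk.sock. A simplified readiness poll under that assumption (the real helper in autotest_common.sh also verifies the PID stays alive between attempts, which is omitted here):

```python
import socket
import time

def wait_for_listen(sock_path: str, timeout: float = 5.0,
                    interval: float = 0.1) -> bool:
    """Poll a UNIX-domain RPC socket until something accepts a connection.

    Simplified stand-in for the waitforlisten helper the log invokes after
    starting nvmf_tgt; readiness is assumed to mean the RPC socket accepts
    a connect(). The real helper also checks that the target PID is alive.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(sock_path)
                return True  # target is listening
            except OSError:
                time.sleep(interval)  # not up yet; retry until deadline
    return False
```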
00:24:14.529 [2024-12-10 22:55:22.216879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.529 [2024-12-10 22:55:22.216891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.529 [2024-12-10 22:55:22.216920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.529 [2024-12-10 22:55:22.218393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.529 [2024-12-10 22:55:22.218399] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.787 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.787 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:14.787 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.787 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:14.787 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:14.787 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.787 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=145644 00:24:14.787 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:15.045 [2024-12-10 22:55:22.620500] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.045 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:15.303 Malloc0 00:24:15.303 22:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:15.562 22:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:15.819 22:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.077 [2024-12-10 22:55:23.720451] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.077 22:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:16.334 [2024-12-10 22:55:23.989151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:16.334 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=145928 00:24:16.334 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:16.334 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:16.334 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 145928 /var/tmp/bdevperf.sock 00:24:16.334 22:55:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 145928 ']' 00:24:16.334 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.334 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.334 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.334 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.334 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:16.591 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.591 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:16.591 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:16.848 22:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:17.411 Nvme0n1 00:24:17.411 22:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:17.668 Nvme0n1 00:24:17.668 22:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:17.668 22:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:20.197 22:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:20.197 22:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:20.197 22:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:20.454 22:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:21.388 22:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:21.388 22:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:21.388 22:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.388 22:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.645 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.646 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:21.646 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.646 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.904 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.904 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.904 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.904 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:22.162 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.162 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:22.162 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.162 22:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.420 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.420 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.420 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.420 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.678 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.678 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:22.678 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.678 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.935 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.935 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:22.935 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:23.194 22:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:23.760 22:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:24.694 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:24.694 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:24.694 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.694 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:24.952 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.952 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:24.952 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.952 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:25.210 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.210 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:25.210 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.210 22:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:25.467 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.467 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:25.467 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.467 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:25.725 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.725 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:25.725 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.725 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:25.982 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.982 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:25.982 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.982 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:26.240 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.240 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:26.240 22:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:26.498 22:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:26.759 22:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:27.742 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:27.742 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:27.742 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.742 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:27.999 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.999 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:27.999 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.999 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:28.257 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:28.257 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:28.257 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.257 22:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:28.515 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.516 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:28.516 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.516 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:29.082 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.082 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:29.082 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.082 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:29.082 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.082 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:29.082 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.082 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:29.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.600 22:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:29.600 22:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:29.859 22:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:30.119 22:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:31.054 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:31.054 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:31.054 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.054 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:31.312 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.312 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:31.312 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.312 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:31.571 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:31.571 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:31.571 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.571 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:31.829 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.829 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:31.829 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.829 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:32.087 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.087 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:32.087 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.087 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:32.344 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.344 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:32.344 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.344 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:32.602 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.602 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:32.602 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:32.861 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:33.430 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:34.370 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:34.370 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:34.370 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.370 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:34.628 22:55:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:34.628 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:34.628 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.628 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:34.886 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:34.886 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:34.886 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.886 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:35.144 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.144 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:35.144 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.144 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:35.402 
22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.402 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:35.402 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.402 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:35.660 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:35.660 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:35.660 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.660 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:35.918 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:35.918 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:35.918 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:36.176 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:36.436 22:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:37.374 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:37.374 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:37.374 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.374 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.631 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.631 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:37.631 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.631 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.889 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.889 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.889 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.889 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:38.147 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.147 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:38.147 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.147 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:38.714 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.714 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:38.714 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.714 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:38.714 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:38.714 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:38.714 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.714 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:38.973 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.973 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:39.232 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:39.232 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:39.799 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:40.057 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:40.991 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:40.991 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:40.991 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:40.991 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:41.250 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.250 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:41.250 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.250 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:41.508 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.508 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:41.508 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.508 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:41.766 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.766 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:41.766 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:41.766 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:42.024 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.024 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:42.024 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.024 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:42.282 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.282 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:42.282 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.282 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:42.850 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.850 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:42.850 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:42.850 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:43.418 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:44.352 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:44.352 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:44.352 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.352 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:44.610 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.610 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:44.610 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.610 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.869 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.869 22:55:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.869 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.869 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:45.127 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.127 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:45.127 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.127 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:45.385 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.385 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:45.385 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.385 22:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.643 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.643 
22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:45.643 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.643 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.901 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.901 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:45.901 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:46.178 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:46.480 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:47.416 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:47.416 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:47.416 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.416 22:55:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:47.674 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.674 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:47.674 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.674 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:47.932 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.932 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:47.932 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.932 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:48.501 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.501 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:48.501 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.501 22:55:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:48.501 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.501 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:48.501 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.501 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:48.759 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.759 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:48.759 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.759 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.329 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.329 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:49.329 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:49.329 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:49.895 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:50.833 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:50.833 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:50.833 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.833 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.092 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.092 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:51.092 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.092 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.350 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.350 22:55:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.350 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.350 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.624 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.624 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.624 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.624 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:51.882 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.882 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:51.882 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.882 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.140 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.140 
22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:52.140 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.140 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.400 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.400 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 145928 00:24:52.400 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 145928 ']' 00:24:52.400 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 145928 00:24:52.400 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:52.400 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:52.400 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145928 00:24:52.400 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:52.400 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:52.400 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145928' 00:24:52.400 killing process with pid 145928 00:24:52.400 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 145928 00:24:52.400 22:56:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 145928 00:24:52.400 { 00:24:52.400 "results": [ 00:24:52.400 { 00:24:52.400 "job": "Nvme0n1", 00:24:52.400 "core_mask": "0x4", 00:24:52.400 "workload": "verify", 00:24:52.400 "status": "terminated", 00:24:52.400 "verify_range": { 00:24:52.400 "start": 0, 00:24:52.400 "length": 16384 00:24:52.400 }, 00:24:52.400 "queue_depth": 128, 00:24:52.400 "io_size": 4096, 00:24:52.400 "runtime": 34.487594, 00:24:52.400 "iops": 7732.606687494639, 00:24:52.400 "mibps": 30.205494873025934, 00:24:52.400 "io_failed": 0, 00:24:52.400 "io_timeout": 0, 00:24:52.400 "avg_latency_us": 16517.792202155095, 00:24:52.400 "min_latency_us": 164.59851851851852, 00:24:52.400 "max_latency_us": 4026531.84 00:24:52.400 } 00:24:52.400 ], 00:24:52.400 "core_count": 1 00:24:52.400 } 00:24:52.661 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 145928 00:24:52.661 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:52.661 [2024-12-10 22:55:24.051406] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:24:52.661 [2024-12-10 22:55:24.051488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145928 ] 00:24:52.661 [2024-12-10 22:55:24.121363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.661 [2024-12-10 22:55:24.181308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.661 Running I/O for 90 seconds... 
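The repeated `port_status` checks in the trace above pipe `bdev_nvme_get_io_paths` output through a jq filter of the form `.poll_groups[].io_paths[] | select(.transport.trsvcid=="<port>").<field>`. A minimal Python sketch of the same selection logic follows; the sample JSON is illustrative only (hypothetical values shaped like the structure the jq filter queries, not data captured from this run):

```python
import json

# Hypothetical bdev_nvme_get_io_paths-style output, shaped like the
# .poll_groups[].io_paths[] structure the jq filter in the trace selects on.
SAMPLE = json.dumps({
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"},
             "current": True, "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"},
             "current": False, "connected": True, "accessible": False},
        ]}
    ]
})

def port_status(raw_json, trsvcid, field):
    """Mirror of: jq -r '.poll_groups[].io_paths[]
    | select(.transport.trsvcid=="<port>").<field>'"""
    data = json.loads(raw_json)
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            # Match the path listening on the requested service ID (port)
            # and return the requested boolean field.
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None  # no path on that port

print(port_status(SAMPLE, "4420", "current"))     # True
print(port_status(SAMPLE, "4421", "accessible"))  # False
```

The test script's `check_status` helper simply runs this kind of query once per (port, field) pair and compares the result against the expected `true`/`false`, which is why each ANA-state change in the log is followed by six nearly identical `port_status` invocations.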
00:24:52.661 8245.00 IOPS, 32.21 MiB/s [2024-12-10T21:56:00.393Z] 8307.50 IOPS, 32.45 MiB/s [2024-12-10T21:56:00.393Z] 8257.33 IOPS, 32.26 MiB/s [2024-12-10T21:56:00.394Z] 8290.50 IOPS, 32.38 MiB/s [2024-12-10T21:56:00.394Z] 8257.80 IOPS, 32.26 MiB/s [2024-12-10T21:56:00.394Z] 8283.17 IOPS, 32.36 MiB/s [2024-12-10T21:56:00.394Z] 8238.57 IOPS, 32.18 MiB/s [2024-12-10T21:56:00.394Z] 8210.12 IOPS, 32.07 MiB/s [2024-12-10T21:56:00.394Z] 8195.56 IOPS, 32.01 MiB/s [2024-12-10T21:56:00.394Z] 8217.10 IOPS, 32.10 MiB/s [2024-12-10T21:56:00.394Z] 8210.00 IOPS, 32.07 MiB/s [2024-12-10T21:56:00.394Z] 8218.25 IOPS, 32.10 MiB/s [2024-12-10T21:56:00.394Z] 8223.54 IOPS, 32.12 MiB/s [2024-12-10T21:56:00.394Z] 8232.29 IOPS, 32.16 MiB/s [2024-12-10T21:56:00.394Z] 8234.60 IOPS, 32.17 MiB/s [2024-12-10T21:56:00.394Z] [2024-12-10 22:55:40.569679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.662 [2024-12-10 22:55:40.569744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.569836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.569866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.569904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.569932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.569970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 
nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.569996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.570034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.570061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.570100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.570127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.570163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.570188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.570225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.570250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.570284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.662 [2024-12-10 22:55:40.570308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.570358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.662 [2024-12-10 22:55:40.570384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.570432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.662 [2024-12-10 22:55:40.570457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.570489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.662 [2024-12-10 22:55:40.570516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.570574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.662 [2024-12-10 22:55:40.570603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.570642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.662 [2024-12-10 22:55:40.570669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.570708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72672 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.662 [2024-12-10 22:55:40.570734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.571719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.662 [2024-12-10 22:55:40.571751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.571793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.571822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.571876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.571903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.571939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.571965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.572001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.572027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.572063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.572104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.572141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.572174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.572215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.572243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.572852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.572885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.572930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.572960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.573001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:52.662 [2024-12-10 22:55:40.573029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.573068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.573095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.573134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.573161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.573202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.573228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.573270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.573297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.573337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.573364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:52.662 
[2024-12-10 22:55:40.573403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.573430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.573469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.573495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.573533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.573576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.573619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.573646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.573688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 22:55:40.573715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:52.662 [2024-12-10 22:55:40.573754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.662 [2024-12-10 
22:55:40.573781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.573820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.573847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.573886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.573912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.573951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.573978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 
22:55:40.574152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574518] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.574944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.574984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.575958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.575985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:52.663 [2024-12-10 22:55:40.576024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.663 [2024-12-10 22:55:40.576051] nvme_qpair.c: 
[... ~50 further WRITE/READ command/completion pairs elided: each completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, WRITEs covering lba 73184-73432 plus an occasional READ (e.g. lba:72688), sqhd incrementing 0x0030-0x0050 ...]
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:52.664 [2024-12-10 22:55:40.578649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.664 [2024-12-10 22:55:40.578677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:52.664 [2024-12-10 22:55:40.578719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.664 [2024-12-10 22:55:40.578746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:52.664 [2024-12-10 22:55:40.578789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.664 [2024-12-10 22:55:40.578816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:52.664 [2024-12-10 22:55:40.578861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.664 [2024-12-10 22:55:40.578888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:52.664 [2024-12-10 22:55:40.578938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.664 [2024-12-10 22:55:40.578967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:52.664 7735.31 IOPS, 30.22 MiB/s [2024-12-10T21:56:00.396Z] 
7280.29 IOPS, 28.44 MiB/s [2024-12-10T21:56:00.396Z] 6875.83 IOPS, 26.86 MiB/s [2024-12-10T21:56:00.396Z] 6513.95 IOPS, 25.45 MiB/s [2024-12-10T21:56:00.396Z] 6590.55 IOPS, 25.74 MiB/s [2024-12-10T21:56:00.396Z] 6659.24 IOPS, 26.01 MiB/s [2024-12-10T21:56:00.396Z] 6758.73 IOPS, 26.40 MiB/s [2024-12-10T21:56:00.396Z] 6945.04 IOPS, 27.13 MiB/s [2024-12-10T21:56:00.396Z] 7106.38 IOPS, 27.76 MiB/s [2024-12-10T21:56:00.396Z] 7239.36 IOPS, 28.28 MiB/s [2024-12-10T21:56:00.396Z] 7275.96 IOPS, 28.42 MiB/s [2024-12-10T21:56:00.396Z] 7311.67 IOPS, 28.56 MiB/s [2024-12-10T21:56:00.396Z] 7334.50 IOPS, 28.65 MiB/s [2024-12-10T21:56:00.396Z] 7408.24 IOPS, 28.94 MiB/s [2024-12-10T21:56:00.396Z] 7515.40 IOPS, 29.36 MiB/s [2024-12-10T21:56:00.396Z] 7622.77 IOPS, 29.78 MiB/s [2024-12-10T21:56:00.396Z] [2024-12-10 22:55:57.322191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.664 [2024-12-10 22:55:57.322274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:52.664 [2024-12-10 22:55:57.322377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.664 [2024-12-10 22:55:57.322404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:52.664 [2024-12-10 22:55:57.322440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.664 [2024-12-10 22:55:57.322465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:52.664 [2024-12-10 22:55:57.322499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.664 [2024-12-10 22:55:57.322525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:52.664 [2024-12-10 22:55:57.322581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.664 [2024-12-10 22:55:57.322617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:52.664 [2024-12-10 22:55:57.322653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.322679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.322748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.322775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.322813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.322839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.322897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.322923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.322969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.322996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.323047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.323077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.323112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.323139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.323175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.323201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.323238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.665 [2024-12-10 22:55:57.323279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.323315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116176 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.665 [2024-12-10 22:55:57.323355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.323388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.665 [2024-12-10 22:55:57.323413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.323448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.665 [2024-12-10 22:55:57.323473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.327668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.327700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.327745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.327773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.327813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.327840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 
cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.327902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.327929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.327968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.327995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.328066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.328145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.328209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116752 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.328274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.328338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.665 [2024-12-10 22:55:57.328402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.665 [2024-12-10 22:55:57.328478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.665 [2024-12-10 22:55:57.328541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.665 [2024-12-10 22:55:57.328645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.665 [2024-12-10 22:55:57.328710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.328775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.328839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:52.665 [2024-12-10 22:55:57.328903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.665 [2024-12-10 22:55:57.328930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:52.665 7690.94 IOPS, 30.04 MiB/s [2024-12-10T21:56:00.397Z] 7712.12 IOPS, 30.13 MiB/s [2024-12-10T21:56:00.397Z] 7729.15 IOPS, 30.19 MiB/s [2024-12-10T21:56:00.397Z] Received shutdown signal, test time was about 34.488461 seconds 00:24:52.665 00:24:52.665 Latency(us) 00:24:52.665 [2024-12-10T21:56:00.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.665 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:52.665 Verification LBA range: start 0x0 length 0x4000 
00:24:52.665 Nvme0n1 : 34.49 7732.61 30.21 0.00 0.00 16517.79 164.60 4026531.84 00:24:52.665 [2024-12-10T21:56:00.397Z] =================================================================================================================== 00:24:52.665 [2024-12-10T21:56:00.397Z] Total : 7732.61 30.21 0.00 0.00 16517.79 164.60 4026531.84 00:24:52.665 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.924 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:52.924 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:52.924 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:52.924 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:52.924 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:52.924 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:52.924 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:52.924 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:52.924 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:52.924 rmmod nvme_tcp 00:24:52.924 rmmod nvme_fabrics 00:24:52.924 rmmod nvme_keyring 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:53.182 22:56:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 145644 ']' 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 145644 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 145644 ']' 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 145644 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145644 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145644' 00:24:53.182 killing process with pid 145644 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 145644 00:24:53.182 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 145644 00:24:53.445 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:53.445 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:53.445 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:53.445 22:56:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:53.445 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:53.445 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:53.445 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:53.445 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:53.445 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:53.445 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.445 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.445 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.353 22:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:55.353 00:24:55.353 real 0m43.437s 00:24:55.353 user 2m11.254s 00:24:55.353 sys 0m11.100s 00:24:55.353 22:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:55.353 22:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:55.353 ************************************ 00:24:55.353 END TEST nvmf_host_multipath_status 00:24:55.353 ************************************ 00:24:55.353 22:56:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:55.353 22:56:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:55.353 22:56:03 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:55.353 22:56:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.353 ************************************ 00:24:55.353 START TEST nvmf_discovery_remove_ifc 00:24:55.353 ************************************ 00:24:55.353 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:55.611 * Looking for test storage... 00:24:55.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:55.611 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:55.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.612 --rc genhtml_branch_coverage=1 00:24:55.612 --rc genhtml_function_coverage=1 00:24:55.612 --rc genhtml_legend=1 00:24:55.612 --rc geninfo_all_blocks=1 00:24:55.612 --rc geninfo_unexecuted_blocks=1 00:24:55.612 00:24:55.612 ' 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:55.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.612 --rc genhtml_branch_coverage=1 00:24:55.612 --rc genhtml_function_coverage=1 00:24:55.612 --rc genhtml_legend=1 00:24:55.612 --rc geninfo_all_blocks=1 00:24:55.612 --rc geninfo_unexecuted_blocks=1 00:24:55.612 00:24:55.612 ' 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:55.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.612 --rc genhtml_branch_coverage=1 00:24:55.612 --rc genhtml_function_coverage=1 00:24:55.612 --rc genhtml_legend=1 00:24:55.612 --rc geninfo_all_blocks=1 00:24:55.612 --rc geninfo_unexecuted_blocks=1 00:24:55.612 00:24:55.612 ' 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:55.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.612 --rc genhtml_branch_coverage=1 00:24:55.612 --rc 
genhtml_function_coverage=1 00:24:55.612 --rc genhtml_legend=1 00:24:55.612 --rc geninfo_all_blocks=1 00:24:55.612 --rc geninfo_unexecuted_blocks=1 00:24:55.612 00:24:55.612 ' 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:55.612 22:56:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:55.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:55.612 
22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:55.612 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:55.613 22:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:58.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:58.157 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:58.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.157 22:56:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:58.157 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.157 22:56:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.157 22:56:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:24:58.157 00:24:58.157 --- 10.0.0.2 ping statistics --- 00:24:58.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.157 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:24:58.157 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:24:58.157 00:24:58.157 --- 10.0.0.1 ping statistics --- 00:24:58.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.157 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=152396 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 152396 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 152396 ']' 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.158 [2024-12-10 22:56:05.595479] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:24:58.158 [2024-12-10 22:56:05.595574] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.158 [2024-12-10 22:56:05.664707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.158 [2024-12-10 22:56:05.717289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.158 [2024-12-10 22:56:05.717365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:58.158 [2024-12-10 22:56:05.717379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:58.158 [2024-12-10 22:56:05.717390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:58.158 [2024-12-10 22:56:05.717399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:58.158 [2024-12-10 22:56:05.718052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:58.158 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:58.416 [2024-12-10 22:56:05.910222] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:58.416 [2024-12-10 22:56:05.918438] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:24:58.416 null0
00:24:58.416 [2024-12-10 22:56:05.950358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=152427
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 152427 /tmp/host.sock
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 152427 ']'
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:24:58.416 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:58.416 22:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:58.416 [2024-12-10 22:56:06.022768] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
00:24:58.416 [2024-12-10 22:56:06.022858] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152427 ]
00:24:58.416 [2024-12-10 22:56:06.090873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:58.675 [2024-12-10 22:56:06.150905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.675 22:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:00.048 [2024-12-10 22:56:07.387456] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:00.048 [2024-12-10 22:56:07.387494] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:00.048 [2024-12-10 22:56:07.387539] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:00.048 [2024-12-10 22:56:07.515974] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:25:00.048 [2024-12-10 22:56:07.697065] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:25:00.048 [2024-12-10 22:56:07.698113] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bfd590:1 started.
00:25:00.048 [2024-12-10 22:56:07.699735] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:25:00.048 [2024-12-10 22:56:07.699792] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:25:00.048 [2024-12-10 22:56:07.699837] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:25:00.048 [2024-12-10 22:56:07.699859] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:00.048 [2024-12-10 22:56:07.699908] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:00.048 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.048 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:25:00.048 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:00.048 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:00.048 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:00.048 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.048 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:00.049 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:00.049 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:00.049 [2024-12-10 22:56:07.706515] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bfd590 was disconnected and freed. delete nvme_qpair.
00:25:00.049 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.049 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:25:00.049 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:25:00.049 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:25:00.306 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:25:00.306 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:00.306 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:00.306 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.306 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:00.306 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:00.306 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:00.306 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:00.306 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.306 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:00.306 22:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:01.239 22:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:01.239 22:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:01.239 22:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.239 22:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:01.239 22:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:01.239 22:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:01.239 22:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:01.239 22:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.239 22:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:01.239 22:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:02.612 22:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:02.612 22:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:02.612 22:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:02.612 22:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.612 22:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:02.612 22:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:02.612 22:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:02.612 22:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.612 22:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:02.612 22:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:03.544 22:56:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:03.544 22:56:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:03.544 22:56:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.544 22:56:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:03.544 22:56:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:03.544 22:56:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:03.544 22:56:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:03.544 22:56:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.544 22:56:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:03.544 22:56:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:04.478 22:56:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:04.479 22:56:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:04.479 22:56:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:04.479 22:56:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:04.479 22:56:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:04.479 22:56:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:04.479 22:56:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:04.479 22:56:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:04.479 22:56:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:04.479 22:56:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:05.412 22:56:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:05.412 22:56:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:05.412 22:56:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:05.412 22:56:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.412 22:56:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:05.412 22:56:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:05.412 22:56:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:05.412 22:56:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.412 22:56:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:05.412 22:56:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:05.412 [2024-12-10 22:56:13.141196] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:25:05.412 [2024-12-10 22:56:13.141279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:05.412 [2024-12-10 22:56:13.141302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:05.412 [2024-12-10 22:56:13.141321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:05.412 [2024-12-10 22:56:13.141334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:05.412 [2024-12-10 22:56:13.141347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:05.412 [2024-12-10 22:56:13.141360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:05.412 [2024-12-10 22:56:13.141374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:05.412 [2024-12-10 22:56:13.141387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:05.412 [2024-12-10 22:56:13.141400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:25:05.412 [2024-12-10 22:56:13.141412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:05.412 [2024-12-10 22:56:13.141425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd9e10 is same with the state(6) to be set
00:25:05.670 [2024-12-10 22:56:13.151211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd9e10 (9): Bad file descriptor
00:25:05.670 [2024-12-10 22:56:13.161251] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:25:05.670 [2024-12-10 22:56:13.161272] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:25:05.670 [2024-12-10 22:56:13.161285] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:05.670 [2024-12-10 22:56:13.161301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:05.670 [2024-12-10 22:56:13.161359] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:06.602 22:56:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:06.602 22:56:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:06.602 22:56:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:06.602 22:56:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:06.602 22:56:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.602 22:56:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:06.602 22:56:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:06.602 [2024-12-10 22:56:14.168563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:25:06.602 [2024-12-10 22:56:14.168616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bd9e10 with addr=10.0.0.2, port=4420
00:25:06.602 [2024-12-10 22:56:14.168637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd9e10 is same with the state(6) to be set
00:25:06.602 [2024-12-10 22:56:14.168672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd9e10 (9): Bad file descriptor
00:25:06.602 [2024-12-10 22:56:14.169104] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress.
00:25:06.602 [2024-12-10 22:56:14.169142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:06.602 [2024-12-10 22:56:14.169159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:06.602 [2024-12-10 22:56:14.169175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:06.602 [2024-12-10 22:56:14.169188] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:06.602 [2024-12-10 22:56:14.169198] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:06.602 [2024-12-10 22:56:14.169205] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:06.602 [2024-12-10 22:56:14.169218] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:06.602 [2024-12-10 22:56:14.169226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:06.602 22:56:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.602 22:56:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:06.602 22:56:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:07.541 [2024-12-10 22:56:15.171715] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:07.541 [2024-12-10 22:56:15.171752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:07.541 [2024-12-10 22:56:15.171777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:07.541 [2024-12-10 22:56:15.171793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:07.541 [2024-12-10 22:56:15.171808] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state
00:25:07.541 [2024-12-10 22:56:15.171820] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:07.541 [2024-12-10 22:56:15.171830] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:07.541 [2024-12-10 22:56:15.171838] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:07.541 [2024-12-10 22:56:15.171890] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:25:07.541 [2024-12-10 22:56:15.171955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:07.541 [2024-12-10 22:56:15.171978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.541 [2024-12-10 22:56:15.172000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:07.541 [2024-12-10 22:56:15.172013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.541 [2024-12-10 22:56:15.172026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:07.541 [2024-12-10 22:56:15.172039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.541 [2024-12-10 22:56:15.172053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:07.541 [2024-12-10 22:56:15.172065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.541 [2024-12-10 22:56:15.172080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:25:07.541 [2024-12-10 22:56:15.172094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.541 [2024-12-10 22:56:15.172107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state.
00:25:07.541 [2024-12-10 22:56:15.172158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc9560 (9): Bad file descriptor
00:25:07.541 [2024-12-10 22:56:15.173154] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:25:07.541 [2024-12-10 22:56:15.173175] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]]
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:07.541 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:07.801 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.801 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:25:07.801 22:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:08.769 22:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:08.769 22:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:08.769 22:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:08.769 22:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:08.769 22:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:08.769 22:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:08.769 22:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:08.769 22:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:08.769 22:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:25:08.769 22:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:09.703 [2024-12-10 22:56:17.224156] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:09.703 [2024-12-10 22:56:17.224192] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:09.703 [2024-12-10 22:56:17.224215] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:09.703 [2024-12-10 22:56:17.351625] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1
00:25:09.704 22:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:09.704 22:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:09.704 22:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:09.704 22:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.704 22:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:09.704 22:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:09.704 22:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:09.704 22:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:09.704 22:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:25:09.704 22:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:09.962 [2024-12-10 22:56:17.535794] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420
00:25:09.962 [2024-12-10 22:56:17.536758] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1bb3460:1 started.
00:25:09.962 [2024-12-10 22:56:17.538193] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:25:09.962 [2024-12-10 22:56:17.538239] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:25:09.962 [2024-12-10 22:56:17.538282] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:25:09.962 [2024-12-10 22:56:17.538305] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:25:09.962 [2024-12-10 22:56:17.538317] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:09.962 [2024-12-10 22:56:17.542724] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1bb3460 was disconnected and freed. delete nvme_qpair.
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 152427
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 152427 ']'
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 152427
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:10.897 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152427
00:25:10.898 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:10.898 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:10.898 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152427'
killing process with pid 152427
00:25:10.898 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 152427
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 152427
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 152396 ']'
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 152396
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 152396 ']'
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 152396
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152396
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152396'
killing process
with pid 152396 00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 152396 00:25:11.156 22:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 152396 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.414 22:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.954 00:25:13.954 real 0m18.049s 00:25:13.954 user 0m26.098s 00:25:13.954 sys 0m3.137s 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.954 ************************************ 00:25:13.954 END TEST nvmf_discovery_remove_ifc 00:25:13.954 ************************************ 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.954 ************************************ 00:25:13.954 START TEST nvmf_identify_kernel_target 00:25:13.954 ************************************ 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:13.954 * Looking for test storage... 
00:25:13.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:13.954 22:56:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.954 22:56:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.954 --rc genhtml_branch_coverage=1 00:25:13.954 --rc genhtml_function_coverage=1 00:25:13.954 --rc genhtml_legend=1 00:25:13.954 --rc geninfo_all_blocks=1 00:25:13.954 --rc geninfo_unexecuted_blocks=1 00:25:13.954 00:25:13.954 ' 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.954 --rc genhtml_branch_coverage=1 00:25:13.954 --rc genhtml_function_coverage=1 00:25:13.954 --rc genhtml_legend=1 00:25:13.954 --rc geninfo_all_blocks=1 00:25:13.954 --rc geninfo_unexecuted_blocks=1 00:25:13.954 00:25:13.954 ' 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.954 --rc genhtml_branch_coverage=1 00:25:13.954 --rc genhtml_function_coverage=1 00:25:13.954 --rc genhtml_legend=1 00:25:13.954 --rc geninfo_all_blocks=1 00:25:13.954 --rc geninfo_unexecuted_blocks=1 00:25:13.954 00:25:13.954 ' 00:25:13.954 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.954 --rc genhtml_branch_coverage=1 00:25:13.954 --rc genhtml_function_coverage=1 00:25:13.954 --rc genhtml_legend=1 00:25:13.954 --rc geninfo_all_blocks=1 00:25:13.954 --rc geninfo_unexecuted_blocks=1 00:25:13.954 00:25:13.954 ' 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:13.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:13.955 22:56:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.860 22:56:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:15.860 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.860 22:56:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:15.860 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.860 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.860 22:56:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:15.861 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:15.861 Found net devices under 0000:0a:00.1: cvl_0_1 
00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:15.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:25:15.861 00:25:15.861 --- 10.0.0.2 ping statistics --- 00:25:15.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.861 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:25:15.861 00:25:15.861 --- 10.0.0.1 ping statistics --- 00:25:15.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.861 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:15.861 
22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:15.861 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:16.121 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:16.121 22:56:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:17.059 Waiting for block devices as requested 00:25:17.059 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:25:17.318 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:17.318 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:17.575 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:17.575 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:17.575 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:17.575 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:17.835 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:17.835 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:17.835 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:17.835 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:18.095 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:18.095 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:18.095 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:18.354 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:25:18.354 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:18.354 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:18.613 No valid GPT data, bailing 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:25:18.613 00:25:18.613 Discovery Log Number of Records 2, Generation counter 2 00:25:18.613 =====Discovery Log Entry 0====== 00:25:18.613 trtype: tcp 00:25:18.613 adrfam: ipv4 00:25:18.613 subtype: current discovery subsystem 
00:25:18.613 treq: not specified, sq flow control disable supported 00:25:18.613 portid: 1 00:25:18.613 trsvcid: 4420 00:25:18.613 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:18.613 traddr: 10.0.0.1 00:25:18.613 eflags: none 00:25:18.613 sectype: none 00:25:18.613 =====Discovery Log Entry 1====== 00:25:18.613 trtype: tcp 00:25:18.613 adrfam: ipv4 00:25:18.613 subtype: nvme subsystem 00:25:18.613 treq: not specified, sq flow control disable supported 00:25:18.613 portid: 1 00:25:18.613 trsvcid: 4420 00:25:18.613 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:18.613 traddr: 10.0.0.1 00:25:18.613 eflags: none 00:25:18.613 sectype: none 00:25:18.613 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:18.613 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:18.873 ===================================================== 00:25:18.873 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:18.873 ===================================================== 00:25:18.873 Controller Capabilities/Features 00:25:18.873 ================================ 00:25:18.873 Vendor ID: 0000 00:25:18.873 Subsystem Vendor ID: 0000 00:25:18.873 Serial Number: cacd05908323dfbb0396 00:25:18.873 Model Number: Linux 00:25:18.873 Firmware Version: 6.8.9-20 00:25:18.873 Recommended Arb Burst: 0 00:25:18.873 IEEE OUI Identifier: 00 00 00 00:25:18.873 Multi-path I/O 00:25:18.873 May have multiple subsystem ports: No 00:25:18.873 May have multiple controllers: No 00:25:18.873 Associated with SR-IOV VF: No 00:25:18.873 Max Data Transfer Size: Unlimited 00:25:18.873 Max Number of Namespaces: 0 00:25:18.873 Max Number of I/O Queues: 1024 00:25:18.873 NVMe Specification Version (VS): 1.3 00:25:18.873 NVMe Specification Version (Identify): 1.3 00:25:18.873 Maximum Queue Entries: 1024 
00:25:18.873 Contiguous Queues Required: No 00:25:18.873 Arbitration Mechanisms Supported 00:25:18.873 Weighted Round Robin: Not Supported 00:25:18.873 Vendor Specific: Not Supported 00:25:18.873 Reset Timeout: 7500 ms 00:25:18.873 Doorbell Stride: 4 bytes 00:25:18.873 NVM Subsystem Reset: Not Supported 00:25:18.873 Command Sets Supported 00:25:18.873 NVM Command Set: Supported 00:25:18.873 Boot Partition: Not Supported 00:25:18.873 Memory Page Size Minimum: 4096 bytes 00:25:18.873 Memory Page Size Maximum: 4096 bytes 00:25:18.873 Persistent Memory Region: Not Supported 00:25:18.873 Optional Asynchronous Events Supported 00:25:18.873 Namespace Attribute Notices: Not Supported 00:25:18.873 Firmware Activation Notices: Not Supported 00:25:18.873 ANA Change Notices: Not Supported 00:25:18.873 PLE Aggregate Log Change Notices: Not Supported 00:25:18.873 LBA Status Info Alert Notices: Not Supported 00:25:18.873 EGE Aggregate Log Change Notices: Not Supported 00:25:18.873 Normal NVM Subsystem Shutdown event: Not Supported 00:25:18.873 Zone Descriptor Change Notices: Not Supported 00:25:18.874 Discovery Log Change Notices: Supported 00:25:18.874 Controller Attributes 00:25:18.874 128-bit Host Identifier: Not Supported 00:25:18.874 Non-Operational Permissive Mode: Not Supported 00:25:18.874 NVM Sets: Not Supported 00:25:18.874 Read Recovery Levels: Not Supported 00:25:18.874 Endurance Groups: Not Supported 00:25:18.874 Predictable Latency Mode: Not Supported 00:25:18.874 Traffic Based Keep ALive: Not Supported 00:25:18.874 Namespace Granularity: Not Supported 00:25:18.874 SQ Associations: Not Supported 00:25:18.874 UUID List: Not Supported 00:25:18.874 Multi-Domain Subsystem: Not Supported 00:25:18.874 Fixed Capacity Management: Not Supported 00:25:18.874 Variable Capacity Management: Not Supported 00:25:18.874 Delete Endurance Group: Not Supported 00:25:18.874 Delete NVM Set: Not Supported 00:25:18.874 Extended LBA Formats Supported: Not Supported 00:25:18.874 Flexible 
Data Placement Supported: Not Supported 00:25:18.874 00:25:18.874 Controller Memory Buffer Support 00:25:18.874 ================================ 00:25:18.874 Supported: No 00:25:18.874 00:25:18.874 Persistent Memory Region Support 00:25:18.874 ================================ 00:25:18.874 Supported: No 00:25:18.874 00:25:18.874 Admin Command Set Attributes 00:25:18.874 ============================ 00:25:18.874 Security Send/Receive: Not Supported 00:25:18.874 Format NVM: Not Supported 00:25:18.874 Firmware Activate/Download: Not Supported 00:25:18.874 Namespace Management: Not Supported 00:25:18.874 Device Self-Test: Not Supported 00:25:18.874 Directives: Not Supported 00:25:18.874 NVMe-MI: Not Supported 00:25:18.874 Virtualization Management: Not Supported 00:25:18.874 Doorbell Buffer Config: Not Supported 00:25:18.874 Get LBA Status Capability: Not Supported 00:25:18.874 Command & Feature Lockdown Capability: Not Supported 00:25:18.874 Abort Command Limit: 1 00:25:18.874 Async Event Request Limit: 1 00:25:18.874 Number of Firmware Slots: N/A 00:25:18.874 Firmware Slot 1 Read-Only: N/A 00:25:18.874 Firmware Activation Without Reset: N/A 00:25:18.874 Multiple Update Detection Support: N/A 00:25:18.874 Firmware Update Granularity: No Information Provided 00:25:18.874 Per-Namespace SMART Log: No 00:25:18.874 Asymmetric Namespace Access Log Page: Not Supported 00:25:18.874 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:18.874 Command Effects Log Page: Not Supported 00:25:18.874 Get Log Page Extended Data: Supported 00:25:18.874 Telemetry Log Pages: Not Supported 00:25:18.874 Persistent Event Log Pages: Not Supported 00:25:18.874 Supported Log Pages Log Page: May Support 00:25:18.874 Commands Supported & Effects Log Page: Not Supported 00:25:18.874 Feature Identifiers & Effects Log Page:May Support 00:25:18.874 NVMe-MI Commands & Effects Log Page: May Support 00:25:18.874 Data Area 4 for Telemetry Log: Not Supported 00:25:18.874 Error Log Page Entries 
Supported: 1 00:25:18.874 Keep Alive: Not Supported 00:25:18.874 00:25:18.874 NVM Command Set Attributes 00:25:18.874 ========================== 00:25:18.874 Submission Queue Entry Size 00:25:18.874 Max: 1 00:25:18.874 Min: 1 00:25:18.874 Completion Queue Entry Size 00:25:18.874 Max: 1 00:25:18.874 Min: 1 00:25:18.874 Number of Namespaces: 0 00:25:18.874 Compare Command: Not Supported 00:25:18.874 Write Uncorrectable Command: Not Supported 00:25:18.874 Dataset Management Command: Not Supported 00:25:18.874 Write Zeroes Command: Not Supported 00:25:18.874 Set Features Save Field: Not Supported 00:25:18.874 Reservations: Not Supported 00:25:18.874 Timestamp: Not Supported 00:25:18.874 Copy: Not Supported 00:25:18.874 Volatile Write Cache: Not Present 00:25:18.874 Atomic Write Unit (Normal): 1 00:25:18.874 Atomic Write Unit (PFail): 1 00:25:18.874 Atomic Compare & Write Unit: 1 00:25:18.874 Fused Compare & Write: Not Supported 00:25:18.874 Scatter-Gather List 00:25:18.874 SGL Command Set: Supported 00:25:18.874 SGL Keyed: Not Supported 00:25:18.874 SGL Bit Bucket Descriptor: Not Supported 00:25:18.874 SGL Metadata Pointer: Not Supported 00:25:18.874 Oversized SGL: Not Supported 00:25:18.874 SGL Metadata Address: Not Supported 00:25:18.874 SGL Offset: Supported 00:25:18.874 Transport SGL Data Block: Not Supported 00:25:18.874 Replay Protected Memory Block: Not Supported 00:25:18.874 00:25:18.874 Firmware Slot Information 00:25:18.874 ========================= 00:25:18.874 Active slot: 0 00:25:18.874 00:25:18.874 00:25:18.874 Error Log 00:25:18.874 ========= 00:25:18.874 00:25:18.874 Active Namespaces 00:25:18.874 ================= 00:25:18.874 Discovery Log Page 00:25:18.874 ================== 00:25:18.874 Generation Counter: 2 00:25:18.874 Number of Records: 2 00:25:18.874 Record Format: 0 00:25:18.874 00:25:18.874 Discovery Log Entry 0 00:25:18.874 ---------------------- 00:25:18.874 Transport Type: 3 (TCP) 00:25:18.874 Address Family: 1 (IPv4) 00:25:18.874 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:18.874 Entry Flags: 00:25:18.874 Duplicate Returned Information: 0 00:25:18.874 Explicit Persistent Connection Support for Discovery: 0 00:25:18.874 Transport Requirements: 00:25:18.874 Secure Channel: Not Specified 00:25:18.874 Port ID: 1 (0x0001) 00:25:18.874 Controller ID: 65535 (0xffff) 00:25:18.874 Admin Max SQ Size: 32 00:25:18.874 Transport Service Identifier: 4420 00:25:18.874 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:18.874 Transport Address: 10.0.0.1 00:25:18.874 Discovery Log Entry 1 00:25:18.874 ---------------------- 00:25:18.874 Transport Type: 3 (TCP) 00:25:18.874 Address Family: 1 (IPv4) 00:25:18.874 Subsystem Type: 2 (NVM Subsystem) 00:25:18.874 Entry Flags: 00:25:18.874 Duplicate Returned Information: 0 00:25:18.874 Explicit Persistent Connection Support for Discovery: 0 00:25:18.874 Transport Requirements: 00:25:18.874 Secure Channel: Not Specified 00:25:18.874 Port ID: 1 (0x0001) 00:25:18.874 Controller ID: 65535 (0xffff) 00:25:18.874 Admin Max SQ Size: 32 00:25:18.874 Transport Service Identifier: 4420 00:25:18.874 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:18.874 Transport Address: 10.0.0.1 00:25:18.874 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:18.874 get_feature(0x01) failed 00:25:18.874 get_feature(0x02) failed 00:25:18.874 get_feature(0x04) failed 00:25:18.874 ===================================================== 00:25:18.874 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:18.874 ===================================================== 00:25:18.874 Controller Capabilities/Features 00:25:18.874 ================================ 00:25:18.874 Vendor ID: 0000 00:25:18.874 Subsystem Vendor ID: 
0000 00:25:18.874 Serial Number: c14232e24762ab859b2a 00:25:18.874 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:18.874 Firmware Version: 6.8.9-20 00:25:18.874 Recommended Arb Burst: 6 00:25:18.874 IEEE OUI Identifier: 00 00 00 00:25:18.874 Multi-path I/O 00:25:18.874 May have multiple subsystem ports: Yes 00:25:18.874 May have multiple controllers: Yes 00:25:18.874 Associated with SR-IOV VF: No 00:25:18.874 Max Data Transfer Size: Unlimited 00:25:18.874 Max Number of Namespaces: 1024 00:25:18.874 Max Number of I/O Queues: 128 00:25:18.874 NVMe Specification Version (VS): 1.3 00:25:18.874 NVMe Specification Version (Identify): 1.3 00:25:18.874 Maximum Queue Entries: 1024 00:25:18.874 Contiguous Queues Required: No 00:25:18.874 Arbitration Mechanisms Supported 00:25:18.874 Weighted Round Robin: Not Supported 00:25:18.874 Vendor Specific: Not Supported 00:25:18.874 Reset Timeout: 7500 ms 00:25:18.874 Doorbell Stride: 4 bytes 00:25:18.874 NVM Subsystem Reset: Not Supported 00:25:18.874 Command Sets Supported 00:25:18.874 NVM Command Set: Supported 00:25:18.874 Boot Partition: Not Supported 00:25:18.874 Memory Page Size Minimum: 4096 bytes 00:25:18.874 Memory Page Size Maximum: 4096 bytes 00:25:18.874 Persistent Memory Region: Not Supported 00:25:18.874 Optional Asynchronous Events Supported 00:25:18.874 Namespace Attribute Notices: Supported 00:25:18.874 Firmware Activation Notices: Not Supported 00:25:18.874 ANA Change Notices: Supported 00:25:18.874 PLE Aggregate Log Change Notices: Not Supported 00:25:18.874 LBA Status Info Alert Notices: Not Supported 00:25:18.874 EGE Aggregate Log Change Notices: Not Supported 00:25:18.874 Normal NVM Subsystem Shutdown event: Not Supported 00:25:18.874 Zone Descriptor Change Notices: Not Supported 00:25:18.874 Discovery Log Change Notices: Not Supported 00:25:18.874 Controller Attributes 00:25:18.874 128-bit Host Identifier: Supported 00:25:18.874 Non-Operational Permissive Mode: Not Supported 00:25:18.874 NVM Sets: Not 
Supported 00:25:18.874 Read Recovery Levels: Not Supported 00:25:18.874 Endurance Groups: Not Supported 00:25:18.874 Predictable Latency Mode: Not Supported 00:25:18.874 Traffic Based Keep ALive: Supported 00:25:18.874 Namespace Granularity: Not Supported 00:25:18.874 SQ Associations: Not Supported 00:25:18.874 UUID List: Not Supported 00:25:18.874 Multi-Domain Subsystem: Not Supported 00:25:18.874 Fixed Capacity Management: Not Supported 00:25:18.874 Variable Capacity Management: Not Supported 00:25:18.874 Delete Endurance Group: Not Supported 00:25:18.874 Delete NVM Set: Not Supported 00:25:18.874 Extended LBA Formats Supported: Not Supported 00:25:18.874 Flexible Data Placement Supported: Not Supported 00:25:18.874 00:25:18.874 Controller Memory Buffer Support 00:25:18.874 ================================ 00:25:18.874 Supported: No 00:25:18.874 00:25:18.874 Persistent Memory Region Support 00:25:18.874 ================================ 00:25:18.874 Supported: No 00:25:18.874 00:25:18.874 Admin Command Set Attributes 00:25:18.874 ============================ 00:25:18.874 Security Send/Receive: Not Supported 00:25:18.874 Format NVM: Not Supported 00:25:18.874 Firmware Activate/Download: Not Supported 00:25:18.874 Namespace Management: Not Supported 00:25:18.874 Device Self-Test: Not Supported 00:25:18.874 Directives: Not Supported 00:25:18.874 NVMe-MI: Not Supported 00:25:18.874 Virtualization Management: Not Supported 00:25:18.874 Doorbell Buffer Config: Not Supported 00:25:18.874 Get LBA Status Capability: Not Supported 00:25:18.874 Command & Feature Lockdown Capability: Not Supported 00:25:18.874 Abort Command Limit: 4 00:25:18.874 Async Event Request Limit: 4 00:25:18.874 Number of Firmware Slots: N/A 00:25:18.874 Firmware Slot 1 Read-Only: N/A 00:25:18.874 Firmware Activation Without Reset: N/A 00:25:18.874 Multiple Update Detection Support: N/A 00:25:18.874 Firmware Update Granularity: No Information Provided 00:25:18.874 Per-Namespace SMART Log: Yes 
00:25:18.874 Asymmetric Namespace Access Log Page: Supported 00:25:18.874 ANA Transition Time : 10 sec 00:25:18.874 00:25:18.874 Asymmetric Namespace Access Capabilities 00:25:18.874 ANA Optimized State : Supported 00:25:18.874 ANA Non-Optimized State : Supported 00:25:18.874 ANA Inaccessible State : Supported 00:25:18.874 ANA Persistent Loss State : Supported 00:25:18.874 ANA Change State : Supported 00:25:18.874 ANAGRPID is not changed : No 00:25:18.874 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:18.874 00:25:18.874 ANA Group Identifier Maximum : 128 00:25:18.874 Number of ANA Group Identifiers : 128 00:25:18.875 Max Number of Allowed Namespaces : 1024 00:25:18.875 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:18.875 Command Effects Log Page: Supported 00:25:18.875 Get Log Page Extended Data: Supported 00:25:18.875 Telemetry Log Pages: Not Supported 00:25:18.875 Persistent Event Log Pages: Not Supported 00:25:18.875 Supported Log Pages Log Page: May Support 00:25:18.875 Commands Supported & Effects Log Page: Not Supported 00:25:18.875 Feature Identifiers & Effects Log Page:May Support 00:25:18.875 NVMe-MI Commands & Effects Log Page: May Support 00:25:18.875 Data Area 4 for Telemetry Log: Not Supported 00:25:18.875 Error Log Page Entries Supported: 128 00:25:18.875 Keep Alive: Supported 00:25:18.875 Keep Alive Granularity: 1000 ms 00:25:18.875 00:25:18.875 NVM Command Set Attributes 00:25:18.875 ========================== 00:25:18.875 Submission Queue Entry Size 00:25:18.875 Max: 64 00:25:18.875 Min: 64 00:25:18.875 Completion Queue Entry Size 00:25:18.875 Max: 16 00:25:18.875 Min: 16 00:25:18.875 Number of Namespaces: 1024 00:25:18.875 Compare Command: Not Supported 00:25:18.875 Write Uncorrectable Command: Not Supported 00:25:18.875 Dataset Management Command: Supported 00:25:18.875 Write Zeroes Command: Supported 00:25:18.875 Set Features Save Field: Not Supported 00:25:18.875 Reservations: Not Supported 00:25:18.875 Timestamp: Not Supported 
00:25:18.875 Copy: Not Supported 00:25:18.875 Volatile Write Cache: Present 00:25:18.875 Atomic Write Unit (Normal): 1 00:25:18.875 Atomic Write Unit (PFail): 1 00:25:18.875 Atomic Compare & Write Unit: 1 00:25:18.875 Fused Compare & Write: Not Supported 00:25:18.875 Scatter-Gather List 00:25:18.875 SGL Command Set: Supported 00:25:18.875 SGL Keyed: Not Supported 00:25:18.875 SGL Bit Bucket Descriptor: Not Supported 00:25:18.875 SGL Metadata Pointer: Not Supported 00:25:18.875 Oversized SGL: Not Supported 00:25:18.875 SGL Metadata Address: Not Supported 00:25:18.875 SGL Offset: Supported 00:25:18.875 Transport SGL Data Block: Not Supported 00:25:18.875 Replay Protected Memory Block: Not Supported 00:25:18.875 00:25:18.875 Firmware Slot Information 00:25:18.875 ========================= 00:25:18.875 Active slot: 0 00:25:18.875 00:25:18.875 Asymmetric Namespace Access 00:25:18.875 =========================== 00:25:18.875 Change Count : 0 00:25:18.875 Number of ANA Group Descriptors : 1 00:25:18.875 ANA Group Descriptor : 0 00:25:18.875 ANA Group ID : 1 00:25:18.875 Number of NSID Values : 1 00:25:18.875 Change Count : 0 00:25:18.875 ANA State : 1 00:25:18.875 Namespace Identifier : 1 00:25:18.875 00:25:18.875 Commands Supported and Effects 00:25:18.875 ============================== 00:25:18.875 Admin Commands 00:25:18.875 -------------- 00:25:18.875 Get Log Page (02h): Supported 00:25:18.875 Identify (06h): Supported 00:25:18.875 Abort (08h): Supported 00:25:18.875 Set Features (09h): Supported 00:25:18.875 Get Features (0Ah): Supported 00:25:18.875 Asynchronous Event Request (0Ch): Supported 00:25:18.875 Keep Alive (18h): Supported 00:25:18.875 I/O Commands 00:25:18.875 ------------ 00:25:18.875 Flush (00h): Supported 00:25:18.875 Write (01h): Supported LBA-Change 00:25:18.875 Read (02h): Supported 00:25:18.875 Write Zeroes (08h): Supported LBA-Change 00:25:18.875 Dataset Management (09h): Supported 00:25:18.875 00:25:18.875 Error Log 00:25:18.875 ========= 
00:25:18.875 Entry: 0 00:25:18.875 Error Count: 0x3 00:25:18.875 Submission Queue Id: 0x0 00:25:18.875 Command Id: 0x5 00:25:18.875 Phase Bit: 0 00:25:18.875 Status Code: 0x2 00:25:18.875 Status Code Type: 0x0 00:25:18.875 Do Not Retry: 1 00:25:18.875 Error Location: 0x28 00:25:18.875 LBA: 0x0 00:25:18.875 Namespace: 0x0 00:25:18.875 Vendor Log Page: 0x0 00:25:18.875 ----------- 00:25:18.875 Entry: 1 00:25:18.875 Error Count: 0x2 00:25:18.875 Submission Queue Id: 0x0 00:25:18.875 Command Id: 0x5 00:25:18.875 Phase Bit: 0 00:25:18.875 Status Code: 0x2 00:25:18.875 Status Code Type: 0x0 00:25:18.875 Do Not Retry: 1 00:25:18.875 Error Location: 0x28 00:25:18.875 LBA: 0x0 00:25:18.875 Namespace: 0x0 00:25:18.875 Vendor Log Page: 0x0 00:25:18.875 ----------- 00:25:18.875 Entry: 2 00:25:18.875 Error Count: 0x1 00:25:18.875 Submission Queue Id: 0x0 00:25:18.875 Command Id: 0x4 00:25:18.875 Phase Bit: 0 00:25:18.875 Status Code: 0x2 00:25:18.875 Status Code Type: 0x0 00:25:18.875 Do Not Retry: 1 00:25:18.875 Error Location: 0x28 00:25:18.875 LBA: 0x0 00:25:18.875 Namespace: 0x0 00:25:18.875 Vendor Log Page: 0x0 00:25:18.875 00:25:18.875 Number of Queues 00:25:18.875 ================ 00:25:18.875 Number of I/O Submission Queues: 128 00:25:18.875 Number of I/O Completion Queues: 128 00:25:18.875 00:25:18.875 ZNS Specific Controller Data 00:25:18.875 ============================ 00:25:18.875 Zone Append Size Limit: 0 00:25:18.875 00:25:18.875 00:25:18.875 Active Namespaces 00:25:18.875 ================= 00:25:18.875 get_feature(0x05) failed 00:25:18.875 Namespace ID:1 00:25:18.875 Command Set Identifier: NVM (00h) 00:25:18.875 Deallocate: Supported 00:25:18.875 Deallocated/Unwritten Error: Not Supported 00:25:18.875 Deallocated Read Value: Unknown 00:25:18.875 Deallocate in Write Zeroes: Not Supported 00:25:18.875 Deallocated Guard Field: 0xFFFF 00:25:18.875 Flush: Supported 00:25:18.875 Reservation: Not Supported 00:25:18.875 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:18.875 Size (in LBAs): 1953525168 (931GiB) 00:25:18.875 Capacity (in LBAs): 1953525168 (931GiB) 00:25:18.875 Utilization (in LBAs): 1953525168 (931GiB) 00:25:18.875 UUID: 3bb62c25-ece8-4215-9e73-54956cd11824 00:25:18.875 Thin Provisioning: Not Supported 00:25:18.875 Per-NS Atomic Units: Yes 00:25:18.875 Atomic Boundary Size (Normal): 0 00:25:18.875 Atomic Boundary Size (PFail): 0 00:25:18.875 Atomic Boundary Offset: 0 00:25:18.875 NGUID/EUI64 Never Reused: No 00:25:18.875 ANA group ID: 1 00:25:18.875 Namespace Write Protected: No 00:25:18.875 Number of LBA Formats: 1 00:25:18.875 Current LBA Format: LBA Format #00 00:25:18.875 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:18.875 00:25:18.875 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:18.875 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:18.875 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:18.875 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.875 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:18.875 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.875 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.875 rmmod nvme_tcp 00:25:18.875 rmmod nvme_fabrics 00:25:18.875 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.875 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:18.875 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:18.875 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.876 22:56:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.416 22:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:21.416 22:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:21.416 22:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:21.416 22:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:21.416 22:56:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:21.416 22:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:21.416 22:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:21.416 22:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:21.416 22:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:21.416 22:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:21.416 22:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:22.355 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:22.355 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:22.355 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:22.355 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:22.355 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:22.355 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:22.355 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:22.355 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:22.355 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:22.355 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:22.355 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:22.355 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:22.355 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:22.355 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:22.355 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:22.355 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:25:23.295 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:25:23.295 00:25:23.295 real 0m9.869s 00:25:23.295 user 0m2.158s 00:25:23.295 sys 0m3.728s 00:25:23.295 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.295 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.295 ************************************ 00:25:23.295 END TEST nvmf_identify_kernel_target 00:25:23.295 ************************************ 00:25:23.295 22:56:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:23.295 22:56:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:23.295 22:56:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.295 22:56:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.554 ************************************ 00:25:23.554 START TEST nvmf_auth_host 00:25:23.554 ************************************ 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:23.554 * Looking for test storage... 
00:25:23.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:23.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.554 --rc genhtml_branch_coverage=1 00:25:23.554 --rc genhtml_function_coverage=1 00:25:23.554 --rc genhtml_legend=1 00:25:23.554 --rc geninfo_all_blocks=1 00:25:23.554 --rc geninfo_unexecuted_blocks=1 00:25:23.554 00:25:23.554 ' 00:25:23.554 22:56:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:23.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.554 --rc genhtml_branch_coverage=1 00:25:23.554 --rc genhtml_function_coverage=1 00:25:23.554 --rc genhtml_legend=1 00:25:23.554 --rc geninfo_all_blocks=1 00:25:23.554 --rc geninfo_unexecuted_blocks=1 00:25:23.554 00:25:23.554 ' 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:23.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.554 --rc genhtml_branch_coverage=1 00:25:23.554 --rc genhtml_function_coverage=1 00:25:23.554 --rc genhtml_legend=1 00:25:23.554 --rc geninfo_all_blocks=1 00:25:23.554 --rc geninfo_unexecuted_blocks=1 00:25:23.554 00:25:23.554 ' 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:23.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.554 --rc genhtml_branch_coverage=1 00:25:23.554 --rc genhtml_function_coverage=1 00:25:23.554 --rc genhtml_legend=1 00:25:23.554 --rc geninfo_all_blocks=1 00:25:23.554 --rc geninfo_unexecuted_blocks=1 00:25:23.554 00:25:23.554 ' 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.554 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
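The `cmp_versions`/`lt 1.15 2` trace above splits each version string on `.`, `-`, or `:`, then walks the numeric fields left to right, padding the shorter side with zeros (so `1.15` compares against `2.0`). An assumed-equivalent Python sketch of that comparison — the name `lt` mirrors the shell helper, but this is an illustration, not SPDK's code:

```python
import re

def lt(v1: str, v2: str) -> bool:
    # Split on the same IFS characters the shell helper uses (".", "-", ":")
    # and keep only numeric fields.
    a = [int(x) for x in re.split(r"[.:\-]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:\-]", v2) if x.isdigit()]
    # Pad the shorter list with zeros, matching the (( v < max(len) )) loop.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    # Field-by-field comparison, first difference wins -- exactly what
    # Python's list comparison does.
    return a < b
```

With this, `lt("1.15", "2")` is true (the traced case: lcov 1.15 is older than 2), while `lt("2.39.2", "2.39")` is false.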
00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.555 22:56:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:23.555 22:56:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.555 22:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.090 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:26.091 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:26.091 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
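The `gather_supported_nvmf_pci_devs` trace above buckets NICs into `e810`, `x722`, and `mlx` arrays by (vendor, device) PCI ID, then matches each discovered device (here two Intel 0x8086/0x159b E810 ports) against those buckets. A rough Python sketch of that classification — the ID lists are copied from the trace itself; the function and bucket names are illustrative, not SPDK's:

```python
# Classify a NIC by PCI vendor/device ID the way the traced helper builds
# its e810/x722/mlx arrays. ID sets are transcribed from the trace above.
INTEL, MELLANOX = 0x8086, 0x15B3

E810 = {0x1592, 0x159B}
X722 = {0x37D2}
MLX = {0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x101B, 0x1017, 0x1019, 0x1015, 0x1013}

def classify(vendor: int, device: int) -> str:
    if vendor == INTEL and device in E810:
        return "e810"
    if vendor == INTEL and device in X722:
        return "x722"
    if vendor == MELLANOX and device in MLX:
        return "mlx"
    return "unsupported"
```

The two "Found 0000:0a:00.0/1 (0x8086 - 0x159b)" lines correspond to `classify(0x8086, 0x159B)` returning `"e810"`, which is why both ports land in the `pci_devs` list the TCP setup then walks.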
00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:26.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:26.091 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:26.091 22:56:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:26.091 22:56:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:26.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:25:26.091 00:25:26.091 --- 10.0.0.2 ping statistics --- 00:25:26.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.091 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:25:26.091 00:25:26.091 --- 10.0.0.1 ping statistics --- 00:25:26.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.091 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.091 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=159639 00:25:26.092 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:26.092 22:56:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 159639 00:25:26.092 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 159639 ']' 00:25:26.092 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.092 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:26.092 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.092 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:26.092 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=35e33463c37072e973a91449e0efa000 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:26.350 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5nD 00:25:26.351 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 35e33463c37072e973a91449e0efa000 0 00:25:26.351 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 35e33463c37072e973a91449e0efa000 0 00:25:26.351 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.351 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.351 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=35e33463c37072e973a91449e0efa000 00:25:26.351 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:26.351 22:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5nD 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5nD 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5nD 00:25:26.351 22:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9abe932fdd00a80f185c9ca6215aa98622a18ff1e6100b0c8761e291a2eede33 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.P3t 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9abe932fdd00a80f185c9ca6215aa98622a18ff1e6100b0c8761e291a2eede33 3 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9abe932fdd00a80f185c9ca6215aa98622a18ff1e6100b0c8761e291a2eede33 3 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9abe932fdd00a80f185c9ca6215aa98622a18ff1e6100b0c8761e291a2eede33 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.P3t 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.P3t 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.P3t 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c475a31a3c07f7e9db74f26f687c6e84fc97ce9b84aec71f 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Utp 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c475a31a3c07f7e9db74f26f687c6e84fc97ce9b84aec71f 0 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c475a31a3c07f7e9db74f26f687c6e84fc97ce9b84aec71f 0 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.351 22:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c475a31a3c07f7e9db74f26f687c6e84fc97ce9b84aec71f 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:26.351 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Utp 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Utp 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Utp 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f92ffd7b997e1cd3d140e27c24119f973466bfcae6b706c9 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.azt 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f92ffd7b997e1cd3d140e27c24119f973466bfcae6b706c9 2 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 f92ffd7b997e1cd3d140e27c24119f973466bfcae6b706c9 2 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f92ffd7b997e1cd3d140e27c24119f973466bfcae6b706c9 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.azt 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.azt 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.azt 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b599b0f55d31ac1e8c4c7145c0a77aab 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.I9e 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b599b0f55d31ac1e8c4c7145c0a77aab 1 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b599b0f55d31ac1e8c4c7145c0a77aab 1 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b599b0f55d31ac1e8c4c7145c0a77aab 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.I9e 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.I9e 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.I9e 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=11b73e00afb9d010ccaf56ccba611a22 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.wbW 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 11b73e00afb9d010ccaf56ccba611a22 1 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 11b73e00afb9d010ccaf56ccba611a22 1 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=11b73e00afb9d010ccaf56ccba611a22 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.wbW 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.wbW 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.wbW 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:26.610 22:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bfe44dbeb25f13241fffca193a377d78937e1a6dc92a54f0 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xZ9 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bfe44dbeb25f13241fffca193a377d78937e1a6dc92a54f0 2 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bfe44dbeb25f13241fffca193a377d78937e1a6dc92a54f0 2 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bfe44dbeb25f13241fffca193a377d78937e1a6dc92a54f0 00:25:26.610 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xZ9 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xZ9 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xZ9 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=96875ca421ca4151f5d00cfe675b076a 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.heG 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 96875ca421ca4151f5d00cfe675b076a 0 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 96875ca421ca4151f5d00cfe675b076a 0 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=96875ca421ca4151f5d00cfe675b076a 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:26.611 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.heG 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.heG 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.heG 00:25:26.869 22:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9e840b0c47ca822b31cfb2a80867dbf46e0db562984ff3d3e169e765061a50e8 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iBu 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9e840b0c47ca822b31cfb2a80867dbf46e0db562984ff3d3e169e765061a50e8 3 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9e840b0c47ca822b31cfb2a80867dbf46e0db562984ff3d3e169e765061a50e8 3 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9e840b0c47ca822b31cfb2a80867dbf46e0db562984ff3d3e169e765061a50e8 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iBu 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iBu 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.iBu 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 159639 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 159639 ']' 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
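Each gen_dhchap_key call above reads len/2 random bytes via `xxd -p -c0 /dev/urandom`, then format_dhchap_key hands the hex string and a digest id to an inline `python -` heredoc that emits the final key string. The heredoc body is not shown in the trace, so the sketch below is a reconstruction following the NVMe DH-HMAC-CHAP secret representation (base64 of the secret with a little-endian CRC-32 appended, wrapped as `DHHC-1:<hash>:<b64>:`); the function name mirrors the script but the implementation is an assumption.

```python
import base64
import os
import zlib

def format_dhchap_key(secret_hex: str, digest: int) -> str:
    """Wrap a hex secret as DHHC-1:<hash>:<b64(secret || crc32le)>:.

    digest matches the ids used in the trace: 0=null, 1=sha256, 2=sha384, 3=sha512.
    """
    raw = bytes.fromhex(secret_hex)
    # CRC-32 of the secret, little-endian, appended before base64 encoding.
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(raw + crc).decode())

# Mirror the trace: 16 random bytes -> a 32-hex-char key with the "null" (0) digest.
secret = os.urandom(16).hex()
print(format_dhchap_key(secret, 0))
```

The trailing CRC lets a consumer detect a corrupted or truncated secret before attempting authentication with it.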
00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:26.869 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5nD 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.P3t ]] 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.P3t 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Utp 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.azt ]] 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.azt 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.I9e 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.wbW ]] 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wbW 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.xZ9 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.heG ]] 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.heG 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.129 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.iBu 00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.130 22:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:25:27.130 22:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:28.506 Waiting for block devices as requested
00:25:28.506 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:25:28.506 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:25:28.764 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:25:28.764 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:25:28.764 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:25:28.764 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:25:29.024 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:25:29.024 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:25:29.024 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:25:29.024 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:25:29.283 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:25:29.283 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:25:29.283 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:25:29.283 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:25:29.541 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:25:29.541 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:25:29.541 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:25:30.108 No valid GPT data, bailing
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:25:30.108 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:25:30.109
00:25:30.109 Discovery Log Number of Records 2, Generation counter 2
00:25:30.109 =====Discovery Log Entry 0======
00:25:30.109 trtype: tcp
00:25:30.109 adrfam: ipv4
00:25:30.109 subtype: current discovery subsystem
00:25:30.109 treq: not specified, sq flow control disable supported
00:25:30.109 portid: 1
00:25:30.109 trsvcid: 4420
00:25:30.109 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:30.109 traddr: 10.0.0.1
00:25:30.109 eflags: none
00:25:30.109 sectype: none
00:25:30.109 =====Discovery Log Entry 1======
00:25:30.109 trtype: tcp
00:25:30.109 adrfam: ipv4
00:25:30.109 subtype: nvme subsystem
00:25:30.109 treq: not specified, sq flow control disable supported
00:25:30.109 portid: 1
00:25:30.109 trsvcid: 4420
00:25:30.109 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:30.109 traddr: 10.0.0.1
00:25:30.109 eflags: none
00:25:30.109 sectype: none
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==:
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==:
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==:
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]]
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==:
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.109 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.367 nvme0n1
00:25:30.367 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.367 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:30.367 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.367 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.367 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:30.367 22:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755:
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=:
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755:
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]]
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=:
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.367 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.368 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.628 nvme0n1
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==:
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==:
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==:
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]]
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==:
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.628 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.913 nvme0n1
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv:
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK:
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv:
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]]
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK:
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:30.913 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.914 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.174 nvme0n1
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==:
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD:
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==:
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]]
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD:
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:31.174 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.175 nvme0n1
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:31.175 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=:
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=:
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.435 22:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.435 nvme0n1
00:25:31.435 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.435 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:31.435 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.435 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.435 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:31.435 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.435 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:31.435 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:31.435 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.436
22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:31.436 
22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.436 22:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.436 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.696 nvme0n1 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.696 22:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.696 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.697 22:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.697 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.957 nvme0n1 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.957 22:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.957 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.215 nvme0n1 00:25:32.215 22:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.215 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:32.216 22:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.216 22:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.476 nvme0n1 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.476 22:56:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.476 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.735 nvme0n1 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:32.735 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.736 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.306 nvme0n1 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.306 
22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.306 22:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.566 nvme0n1 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.566 22:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.566 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.824 nvme0n1 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.824 22:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:33.824 
22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.824 22:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.824 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.084 nvme0n1 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.084 22:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.084 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.344 
22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.344 22:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.603 nvme0n1 00:25:34.603 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.603 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.603 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.604 22:56:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.604 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.174 nvme0n1 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.174 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.175 22:56:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.175 22:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.742 nvme0n1 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.742 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.743 22:56:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.743 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.312 nvme0n1 00:25:36.312 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.312 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.312 22:56:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.312 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.312 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.312 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.313 22:56:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.313 22:56:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.313 22:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.573 nvme0n1 00:25:36.573 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.831 22:56:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.831 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.832 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.832 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.832 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.832 22:56:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.832 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.832 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.832 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.832 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.403 nvme0n1 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.403 22:56:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.403 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.404 22:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.338 nvme0n1 00:25:38.338 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.338 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.338 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.338 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.338 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.338 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.338 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.339 22:56:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.339 22:56:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.339 22:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.339 22:56:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.280 nvme0n1 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.280 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.281 22:56:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.281 22:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.217 nvme0n1 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.217 22:56:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.217 22:56:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.217 22:56:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.217 22:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.786 nvme0n1 00:25:40.786 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.786 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.786 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.786 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.786 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.786 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.046 22:56:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.046 22:56:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.046 22:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.988 nvme0n1 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.988 22:56:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.988 nvme0n1 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.988 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.989 22:56:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.989 22:56:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:41.989 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.989 22:56:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.250 nvme0n1 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.250 22:56:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.250 22:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.510 nvme0n1 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.511 22:56:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.511 22:56:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.511 22:56:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.511 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.771 nvme0n1 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.771 22:56:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.771 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.772 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.772 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.772 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.772 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.772 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.772 22:56:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.772 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.772 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.772 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.030 nvme0n1 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.030 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.290 nvme0n1 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.290 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:43.291 22:56:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.291 22:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.551 nvme0n1 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.551 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.552 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.811 nvme0n1 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 
00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.811 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.072 nvme0n1 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.072 22:56:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.072 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.333 nvme0n1 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.333 22:56:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.333 22:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.594 nvme0n1 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=1 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.594 22:56:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.594 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.853 nvme0n1 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 
-- # echo 'hmac(sha384)' 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.853 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.112 22:56:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.112 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.370 nvme0n1 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.370 22:56:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:45.370 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:45.371 22:56:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.371 22:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.631 nvme0n1 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.631 22:56:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.631 22:56:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.631 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.632 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.632 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.632 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.632 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.632 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.632 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.632 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.632 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.632 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.632 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.632 
22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.632 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.892 nvme0n1 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.892 22:56:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.892 22:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.460 nvme0n1 
00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:46.460 22:56:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.460 
22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.460 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 nvme0n1 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.030 22:56:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.030 22:56:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.030 22:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.598 nvme0n1 00:25:47.598 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.598 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.598 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:47.599 22:56:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.599 22:56:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.599 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.168 nvme0n1 00:25:48.168 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.168 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.168 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.168 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.168 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.168 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.168 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.168 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.168 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.168 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.169 22:56:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:48.169 22:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.738 nvme0n1 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.738 22:56:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.738 22:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.682 nvme0n1 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.682 22:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.646 nvme0n1 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.646 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.647 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.647 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.647 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.647 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.647 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.647 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.647 22:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.582 nvme0n1 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.582 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.583 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.521 nvme0n1 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.521 22:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.521 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:53.458 nvme0n1 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
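The `host/auth.sh@100`–`@104` markers above show the test's nested loop: for every digest, DH group, and key index it sets the target key, reconfigures the initiator, and runs one authenticated attach/verify/detach cycle. A minimal sketch of that control flow (illustrative names, not the actual SPDK script) that just enumerates the `rpc_cmd` calls one pass would issue:

```python
# Hypothetical model of the host/auth.sh@100-104 loop seen in this log.
# Each keyid produces: set_options, attach_controller, get_controllers
# (to verify the name is nvme0), detach_controller.
def auth_matrix(digests, dhgroups, keyids, have_ckey):
    """Yield the rpc_cmd invocations of one test pass."""
    for digest in digests:
        for dhgroup in dhgroups:
            for keyid in keyids:
                yield ["bdev_nvme_set_options",
                       "--dhchap-digests", digest,
                       "--dhchap-dhgroups", dhgroup]
                attach = ["bdev_nvme_attach_controller", "-b", "nvme0",
                          "--dhchap-key", f"key{keyid}"]
                # The controller key is optional: keyid=4 above has an
                # empty ckey, so --dhchap-ctrlr-key is omitted there.
                if have_ckey(keyid):
                    attach += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
                yield attach
                yield ["bdev_nvme_get_controllers"]
                yield ["bdev_nvme_detach_controller", "nvme0"]
```

This mirrors why the log alternates `bdev_nvme_set_options`, `bdev_nvme_attach_controller`, `bdev_nvme_get_controllers`, and `bdev_nvme_detach_controller` for each `keyid`.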
00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.458 22:57:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.458 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.458 nvme0n1 00:25:53.458 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.459 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.459 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.459 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.459 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.459 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.459 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.459 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.459 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.459 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.717 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.718 nvme0n1 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
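The repeated `get_main_ns_ip` trace (`nvmf/common.sh@769`–`@783`) resolves which address the initiator should dial: it maps the transport to a candidate environment variable (`rdma` → `NVMF_FIRST_TARGET_IP`, `tcp` → `NVMF_INITIATOR_IP`) and echoes its value, here `10.0.0.1`. A rough Python equivalent of that selection logic (an assumption-laden sketch, not the shell function itself):

```python
# Sketch of the get_main_ns_ip candidate selection visible in this log.
def get_main_ns_ip(transport, env):
    """Return the IP for the given transport, mimicking nvmf/common.sh."""
    ip_candidates = {
        "rdma": "NVMF_FIRST_TARGET_IP",
        "tcp": "NVMF_INITIATOR_IP",
    }
    if not transport or transport not in ip_candidates:
        raise ValueError("unknown or empty transport")
    var = ip_candidates[transport]
    ip = env.get(var, "")
    if not ip:  # matches the [[ -z ... ]] guards in the trace
        raise ValueError(f"{var} is not set")
    return ip
```

With `transport="tcp"` and `NVMF_INITIATOR_IP=10.0.0.1` this yields the `10.0.0.1` echoed before each `bdev_nvme_attach_controller` call.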
00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 
00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.718 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.981 nvme0n1 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:53.981 22:57:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.981 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.242 nvme0n1 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 
00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.242 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.502 nvme0n1 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 
00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.502 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.762 nvme0n1 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe3072 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.762 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.763 22:57:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.763 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.023 nvme0n1 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.023 22:57:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 
-- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.023 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.282 nvme0n1 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.282 22:57:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.282 22:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.542 nvme0n1
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=:
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=:
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.542 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.802 nvme0n1
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755:
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=:
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755:
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]]
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=:
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:55.802 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.803 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.063 nvme0n1
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==:
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==:
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==:
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]]
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==:
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.063 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.321 nvme0n1
00:25:56.321 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.321 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:56.321 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.321 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.321 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:56.321 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv:
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK:
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv:
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]]
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK:
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.580 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.838 nvme0n1
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==:
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD:
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==:
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]]
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD:
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.838 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.839 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.099 nvme0n1
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=:
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=:
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.099 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.359 nvme0n1
00:25:57.359 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.359 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:57.359 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.359 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.359 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:57.359 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.359 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:57.359 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:57.359 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.359 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.618 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.618 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:57.618 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:57.618 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:25:57.618 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:57.618 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:57.618 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:57.618 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:57.618 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755:
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=:
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755:
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]]
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=:
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.619 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.877 nvme0n1
00:25:57.877 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.877 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:57.877 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.877 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.877 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:57.877 22:57:05
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:25:58.135 22:57:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.135 22:57:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.135 22:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.700 nvme0n1 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.700 22:57:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:58.700 22:57:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.700 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.701 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.701 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.701 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.701 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.701 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.701 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.701 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.701 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.270 nvme0n1 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.270 22:57:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.270 22:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.838 nvme0n1 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.838 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.839 
22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.839 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.405 nvme0n1 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.405 22:57:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzVlMzM0NjNjMzcwNzJlOTczYTkxNDQ5ZTBlZmEwMDAB9755: 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: ]] 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFiZTkzMmZkZDAwYTgwZjE4NWM5Y2E2MjE1YWE5ODYyMmExOGZmMWU2MTAwYjBjODc2MWUyOTFhMmVlZGUzM3j84WE=: 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.405 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.340 nvme0n1 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:26:01.340 22:57:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.340 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.341 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.341 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.341 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.278 nvme0n1 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.278 
22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.278 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.279 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.279 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.279 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.279 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.215 nvme0n1 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.215 22:57:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNDRkYmViMjVmMTMyNDFmZmZjYTE5M2EzNzdkNzg5MzdlMWE2ZGM5MmE1NGYwWcD4Zg==: 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: ]] 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4NzVjYTQyMWNhNDE1MWY1ZDAwY2ZlNjc1YjA3NmF8NaQD: 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.215 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:04.155 nvme0n1 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWU4NDBiMGM0N2NhODIyYjMxY2ZiMmE4MDg2N2RiZjQ2ZTBkYjU2Mjk4NGZmM2QzZTE2OWU3NjUwNjFhNTBlOM2Y0zs=: 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.155 
22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.155 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.724 nvme0n1 00:26:04.724 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.983 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.983 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.983 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.983 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:26:04.984 
22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.984 request: 00:26:04.984 { 00:26:04.984 "name": "nvme0", 00:26:04.984 "trtype": "tcp", 00:26:04.984 "traddr": "10.0.0.1", 00:26:04.984 "adrfam": "ipv4", 00:26:04.984 "trsvcid": "4420", 00:26:04.984 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:04.984 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:04.984 "prchk_reftag": false, 00:26:04.984 "prchk_guard": false, 00:26:04.984 "hdgst": false, 00:26:04.984 "ddgst": false, 00:26:04.984 "allow_unrecognized_csi": false, 00:26:04.984 "method": "bdev_nvme_attach_controller", 00:26:04.984 "req_id": 1 00:26:04.984 } 00:26:04.984 Got JSON-RPC error response 00:26:04.984 response: 00:26:04.984 { 00:26:04.984 "code": -5, 00:26:04.984 "message": "Input/output 
error" 00:26:04.984 } 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.984 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.245 request: 00:26:05.245 { 00:26:05.245 "name": "nvme0", 00:26:05.245 "trtype": "tcp", 00:26:05.245 "traddr": "10.0.0.1", 
00:26:05.245 "adrfam": "ipv4", 00:26:05.245 "trsvcid": "4420", 00:26:05.245 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:05.245 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:05.245 "prchk_reftag": false, 00:26:05.245 "prchk_guard": false, 00:26:05.245 "hdgst": false, 00:26:05.245 "ddgst": false, 00:26:05.245 "dhchap_key": "key2", 00:26:05.245 "allow_unrecognized_csi": false, 00:26:05.245 "method": "bdev_nvme_attach_controller", 00:26:05.245 "req_id": 1 00:26:05.245 } 00:26:05.245 Got JSON-RPC error response 00:26:05.245 response: 00:26:05.245 { 00:26:05.245 "code": -5, 00:26:05.245 "message": "Input/output error" 00:26:05.245 } 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.245 22:57:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:05.245 22:57:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.245 request: 00:26:05.245 { 00:26:05.245 "name": "nvme0", 00:26:05.245 "trtype": "tcp", 00:26:05.245 "traddr": "10.0.0.1", 00:26:05.245 "adrfam": "ipv4", 00:26:05.245 "trsvcid": "4420", 00:26:05.245 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:05.245 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:05.245 "prchk_reftag": false, 00:26:05.245 "prchk_guard": false, 00:26:05.245 "hdgst": false, 00:26:05.245 "ddgst": false, 00:26:05.245 "dhchap_key": "key1", 00:26:05.245 "dhchap_ctrlr_key": "ckey2", 00:26:05.245 "allow_unrecognized_csi": false, 00:26:05.245 "method": "bdev_nvme_attach_controller", 00:26:05.245 "req_id": 1 00:26:05.245 } 00:26:05.245 Got JSON-RPC error response 00:26:05.245 response: 00:26:05.245 { 00:26:05.245 "code": -5, 00:26:05.245 "message": "Input/output error" 00:26:05.245 } 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.245 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.505 nvme0n1 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.505 22:57:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.505 22:57:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.505 request: 00:26:05.505 { 00:26:05.505 "name": "nvme0", 00:26:05.505 "dhchap_key": "key1", 00:26:05.505 "dhchap_ctrlr_key": "ckey2", 00:26:05.505 "method": "bdev_nvme_set_keys", 00:26:05.505 "req_id": 1 00:26:05.505 } 00:26:05.505 Got JSON-RPC error response 00:26:05.505 response: 00:26:05.505 { 00:26:05.505 "code": -13, 00:26:05.505 "message": "Permission denied" 00:26:05.505 } 00:26:05.505 
22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:05.505 22:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:06.883 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.883 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:06.883 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.883 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.883 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.883 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:06.883 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ3NWEzMWEzYzA3ZjdlOWRiNzRmMjZmNjg3YzZlODRmYzk3Y2U5Yjg0YWVjNzFmwbJlTQ==: 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: ]] 00:26:07.822 22:57:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjkyZmZkN2I5OTdlMWNkM2QxNDBlMjdjMjQxMTlmOTczNDY2YmZjYWU2YjcwNmM5YbdS9w==: 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.822 nvme0n1 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.822 22:57:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjU5OWIwZjU1ZDMxYWMxZThjNGM3MTQ1YzBhNzdhYWI6Xhjv: 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: ]] 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTFiNzNlMDBhZmI5ZDAxMGNjYWY1NmNjYmE2MTFhMjKKTbeK: 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:07.822 
22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.822 request: 00:26:07.822 { 00:26:07.822 "name": "nvme0", 00:26:07.822 "dhchap_key": "key2", 00:26:07.822 "dhchap_ctrlr_key": "ckey1", 00:26:07.822 "method": "bdev_nvme_set_keys", 00:26:07.822 "req_id": 1 00:26:07.822 } 00:26:07.822 Got JSON-RPC error response 00:26:07.822 response: 00:26:07.822 { 00:26:07.822 "code": -13, 00:26:07.822 "message": "Permission denied" 00:26:07.822 } 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:07.822 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:07.823 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:07.823 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:07.823 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:07.823 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.823 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:07.823 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.823 22:57:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.082 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.082 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:08.082 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:09.021 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.021 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:09.021 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.021 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.021 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.021 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:09.021 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:09.021 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:09.021 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:09.021 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:09.021 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:09.022 rmmod nvme_tcp 00:26:09.022 rmmod nvme_fabrics 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 159639 ']' 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 159639 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 159639 ']' 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 159639 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 159639 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 159639' 00:26:09.022 killing process with pid 159639 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 159639 00:26:09.022 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 159639 00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.280 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.214 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:11.214 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:11.472 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:11.472 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:11.472 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:11.472 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:11.472 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:11.472 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:11.472 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:11.472 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:11.472 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:11.472 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:11.472 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:12.850 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:12.850 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:12.850 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:12.850 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:12.850 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:12.850 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:12.850 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:12.850 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:12.850 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:12.850 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:12.850 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:12.850 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:12.850 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:12.850 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:12.850 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:12.850 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:13.790 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:13.790 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5nD /tmp/spdk.key-null.Utp /tmp/spdk.key-sha256.I9e /tmp/spdk.key-sha384.xZ9 /tmp/spdk.key-sha512.iBu 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:13.790 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:15.168 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:15.168 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:15.168 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:15.168 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:15.168 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:15.168 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:15.168 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:15.168 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:15.168 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:15.168 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:15.168 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:15.168 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:15.168 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:15.168 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:15.168 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:15.168 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:15.168 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:15.168 00:26:15.168 real 0m51.765s 00:26:15.168 user 0m49.422s 00:26:15.168 sys 0m6.431s 00:26:15.168 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.169 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.169 ************************************ 00:26:15.169 END TEST nvmf_auth_host 00:26:15.169 ************************************ 00:26:15.169 22:57:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:26:15.169 22:57:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:15.169 22:57:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:15.169 22:57:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.169 22:57:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.169 ************************************ 00:26:15.169 START TEST nvmf_digest 00:26:15.169 ************************************ 00:26:15.169 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:15.169 * Looking for test storage... 00:26:15.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:15.169 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:15.169 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:26:15.169 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:15.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.428 --rc genhtml_branch_coverage=1 00:26:15.428 --rc genhtml_function_coverage=1 00:26:15.428 --rc genhtml_legend=1 00:26:15.428 --rc geninfo_all_blocks=1 00:26:15.428 --rc geninfo_unexecuted_blocks=1 00:26:15.428 00:26:15.428 ' 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:15.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.428 --rc genhtml_branch_coverage=1 00:26:15.428 --rc genhtml_function_coverage=1 00:26:15.428 --rc genhtml_legend=1 00:26:15.428 --rc geninfo_all_blocks=1 00:26:15.428 --rc geninfo_unexecuted_blocks=1 00:26:15.428 00:26:15.428 ' 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:15.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.428 --rc genhtml_branch_coverage=1 00:26:15.428 --rc genhtml_function_coverage=1 00:26:15.428 --rc genhtml_legend=1 00:26:15.428 --rc geninfo_all_blocks=1 00:26:15.428 --rc geninfo_unexecuted_blocks=1 00:26:15.428 00:26:15.428 ' 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:15.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.428 --rc genhtml_branch_coverage=1 00:26:15.428 --rc genhtml_function_coverage=1 00:26:15.428 --rc genhtml_legend=1 00:26:15.428 --rc geninfo_all_blocks=1 00:26:15.428 --rc geninfo_unexecuted_blocks=1 00:26:15.428 00:26:15.428 ' 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:15.428 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.429 22:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.429 22:57:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:15.429 22:57:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:15.429 22:57:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.429 22:57:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:17.329 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.329 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:17.329 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:17.329 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:17.329 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.586 22:57:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:17.586 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:17.586 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:17.586 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:17.586 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.586 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:17.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:26:17.587 00:26:17.587 --- 10.0.0.2 ping statistics --- 00:26:17.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.587 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:17.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:26:17.587 00:26:17.587 --- 10.0.0.1 ping statistics --- 00:26:17.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.587 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:17.587 ************************************ 00:26:17.587 START TEST nvmf_digest_clean 00:26:17.587 ************************************ 00:26:17.587 
22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=169992 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 169992 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 169992 ']' 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.587 22:57:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.587 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:17.587 [2024-12-10 22:57:25.299830] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:26:17.587 [2024-12-10 22:57:25.299924] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.845 [2024-12-10 22:57:25.377963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.845 [2024-12-10 22:57:25.434326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.845 [2024-12-10 22:57:25.434404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.845 [2024-12-10 22:57:25.434418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.845 [2024-12-10 22:57:25.434428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.845 [2024-12-10 22:57:25.434437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:17.845 [2024-12-10 22:57:25.435069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.845 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.845 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:17.845 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:17.845 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:17.845 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:17.845 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.845 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:17.845 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:17.845 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:17.845 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.845 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.104 null0 00:26:18.104 [2024-12-10 22:57:25.675058] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.104 [2024-12-10 22:57:25.699296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=170017 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 170017 /var/tmp/bperf.sock 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 170017 ']' 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.104 22:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.104 [2024-12-10 22:57:25.747221] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:26:18.104 [2024-12-10 22:57:25.747283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170017 ] 00:26:18.104 [2024-12-10 22:57:25.812348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.363 [2024-12-10 22:57:25.868969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.363 22:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.363 22:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:18.363 22:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:18.363 22:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:18.363 22:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:18.932 22:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.932 22:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.190 nvme0n1 00:26:19.190 22:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:19.190 22:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.449 Running I/O for 2 seconds... 00:26:21.325 19680.00 IOPS, 76.88 MiB/s [2024-12-10T21:57:29.057Z] 19528.00 IOPS, 76.28 MiB/s 00:26:21.325 Latency(us) 00:26:21.325 [2024-12-10T21:57:29.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.325 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:21.325 nvme0n1 : 2.00 19545.28 76.35 0.00 0.00 6540.68 3325.35 13883.92 00:26:21.325 [2024-12-10T21:57:29.057Z] =================================================================================================================== 00:26:21.325 [2024-12-10T21:57:29.057Z] Total : 19545.28 76.35 0.00 0.00 6540.68 3325.35 13883.92 00:26:21.325 { 00:26:21.325 "results": [ 00:26:21.325 { 00:26:21.325 "job": "nvme0n1", 00:26:21.325 "core_mask": "0x2", 00:26:21.325 "workload": "randread", 00:26:21.325 "status": "finished", 00:26:21.325 "queue_depth": 128, 00:26:21.325 "io_size": 4096, 00:26:21.325 "runtime": 2.004781, 00:26:21.325 "iops": 19545.277015294938, 00:26:21.325 "mibps": 76.34873834099585, 00:26:21.325 "io_failed": 0, 00:26:21.325 "io_timeout": 0, 00:26:21.325 "avg_latency_us": 6540.681955749134, 00:26:21.325 "min_latency_us": 3325.345185185185, 00:26:21.325 "max_latency_us": 13883.922962962963 00:26:21.325 } 00:26:21.325 ], 00:26:21.325 "core_count": 1 00:26:21.325 } 00:26:21.325 22:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:21.325 22:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:21.325 22:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:21.325 22:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:21.325 | select(.opcode=="crc32c") 00:26:21.325 | "\(.module_name) \(.executed)"' 00:26:21.325 22:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 170017 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 170017 ']' 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 170017 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 170017 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 170017' 00:26:21.583 killing process with pid 170017 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 170017 00:26:21.583 Received shutdown signal, test time was about 2.000000 seconds 00:26:21.583 00:26:21.583 Latency(us) 00:26:21.583 [2024-12-10T21:57:29.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.583 [2024-12-10T21:57:29.315Z] =================================================================================================================== 00:26:21.583 [2024-12-10T21:57:29.315Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.583 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 170017 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=170542 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 170542 /var/tmp/bperf.sock 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 170542 ']' 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:21.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.842 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.842 [2024-12-10 22:57:29.565786] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:26:21.842 [2024-12-10 22:57:29.565867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170542 ] 00:26:21.842 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:21.842 Zero copy mechanism will not be used. 
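The "Zero copy mechanism will not be used" notices in this log come down to a size comparison against the 65536-byte threshold the message itself reports; a minimal sketch of that decision (the helper name and the check are illustrative assumptions taken from the log message, not SPDK's actual implementation):

```python
# Hypothetical sketch: the log says IOs larger than the zero-copy threshold
# (65536 bytes) skip the zero-copy path, so 131072-byte IOs fall back while
# 4096-byte IOs do not trigger the notice.
ZERO_COPY_THRESHOLD = 65536  # value taken from the log message

def zero_copy_used(io_size: int, threshold: int = ZERO_COPY_THRESHOLD) -> bool:
    # "I/O size of 131072 is greater than zero copy threshold (65536)"
    return io_size <= threshold

print(zero_copy_used(131072))  # 128 KiB runs print the notice above
print(zero_copy_used(4096))    # 4 KiB runs do not
```

Consistently, the notice appears only in the `-o 131072` bdevperf runs in this log, never in the `-o 4096` ones.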
00:26:22.100 [2024-12-10 22:57:29.633709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.100 [2024-12-10 22:57:29.687750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.100 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:22.100 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:22.100 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:22.100 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:22.100 22:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:22.672 22:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.672 22:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.931 nvme0n1 00:26:22.931 22:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:22.931 22:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:23.189 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:23.189 Zero copy mechanism will not be used. 00:26:23.189 Running I/O for 2 seconds... 
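Each bdevperf result block in this log pairs an `iops` figure with a `mibps` figure; the two are consistent given the run's `io_size` (MiB here is 2^20 bytes). Checking the 4096-byte randread numbers from the first JSON block above:

```python
# Recompute MiB/s from the "iops" and "io_size" values reported in the
# first result block of this log.
iops = 19545.277015294938   # "iops" from the JSON results
io_size = 4096              # "io_size" from the same block
mibps = iops * io_size / 2**20
print(round(mibps, 2))  # → 76.35, matching the reported "mibps"
```

The same relation holds for the 131072-byte runs (e.g. 5148.60 IOPS × 128 KiB ≈ 643.57 MiB/s).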
00:26:25.068 5140.00 IOPS, 642.50 MiB/s [2024-12-10T21:57:32.800Z] 5149.00 IOPS, 643.62 MiB/s 00:26:25.068 Latency(us) 00:26:25.068 [2024-12-10T21:57:32.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.068 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:25.068 nvme0n1 : 2.00 5148.60 643.57 0.00 0.00 3103.19 801.00 5097.24 00:26:25.068 [2024-12-10T21:57:32.800Z] =================================================================================================================== 00:26:25.068 [2024-12-10T21:57:32.800Z] Total : 5148.60 643.57 0.00 0.00 3103.19 801.00 5097.24 00:26:25.068 { 00:26:25.068 "results": [ 00:26:25.068 { 00:26:25.068 "job": "nvme0n1", 00:26:25.068 "core_mask": "0x2", 00:26:25.068 "workload": "randread", 00:26:25.068 "status": "finished", 00:26:25.068 "queue_depth": 16, 00:26:25.068 "io_size": 131072, 00:26:25.068 "runtime": 2.003264, 00:26:25.068 "iops": 5148.597488898118, 00:26:25.068 "mibps": 643.5746861122648, 00:26:25.068 "io_failed": 0, 00:26:25.068 "io_timeout": 0, 00:26:25.068 "avg_latency_us": 3103.1926583787586, 00:26:25.068 "min_latency_us": 800.9955555555556, 00:26:25.068 "max_latency_us": 5097.2444444444445 00:26:25.068 } 00:26:25.068 ], 00:26:25.068 "core_count": 1 00:26:25.068 } 00:26:25.068 22:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:25.068 22:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:25.068 22:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:25.068 22:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:25.068 22:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:26:25.068 | select(.opcode=="crc32c") 00:26:25.068 | "\(.module_name) \(.executed)"' 00:26:25.326 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:25.326 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:25.326 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:25.326 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:25.326 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 170542 00:26:25.326 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 170542 ']' 00:26:25.327 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 170542 00:26:25.327 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:25.327 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:25.327 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 170542 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 170542' 00:26:25.585 killing process with pid 170542 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 170542 00:26:25.585 Received shutdown signal, test time was about 2.000000 seconds 00:26:25.585 00:26:25.585 
Latency(us) 00:26:25.585 [2024-12-10T21:57:33.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.585 [2024-12-10T21:57:33.317Z] =================================================================================================================== 00:26:25.585 [2024-12-10T21:57:33.317Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 170542 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=170954 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 170954 /var/tmp/bperf.sock 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 170954 ']' 00:26:25.585 22:57:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:25.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.585 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:25.846 [2024-12-10 22:57:33.352075] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:26:25.846 [2024-12-10 22:57:33.352176] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170954 ] 00:26:25.846 [2024-12-10 22:57:33.419645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.846 [2024-12-10 22:57:33.476033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.107 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:26.107 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:26.107 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:26.107 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:26.107 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:26.366 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.366 22:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.624 nvme0n1 00:26:26.624 22:57:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:26.624 22:57:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:26.883 Running I/O for 2 seconds... 
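The accel-stats check repeated throughout this log pipes `accel_get_stats` through a jq filter to pull out the crc32c module name and execution count. An equivalent of that filter in Python (the sample payload below is a hypothetical shape inferred from the jq expression and the `exp_module=software` comparison, not captured RPC output):

```python
# Python equivalent of the jq filter used in the log:
#   .operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"
# The stats payload is a made-up example shaped to match that filter.
stats = {
    "operations": [
        {"opcode": "copy", "module_name": "software", "executed": 10},
        {"opcode": "crc32c", "module_name": "software", "executed": 39090},
    ]
}

lines = [
    f'{op["module_name"]} {op["executed"]}'
    for op in stats["operations"]
    if op["opcode"] == "crc32c"
]
print(lines[0])  # → "software 39090"
```

The script then `read`s this output into `acc_module acc_executed` and verifies `(( acc_executed > 0 ))` and that the module matches the expected `software` backend, as seen in the `digest.sh@93`–`@96` traces.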
00:26:28.762 19013.00 IOPS, 74.27 MiB/s [2024-12-10T21:57:36.494Z] 18850.50 IOPS, 73.63 MiB/s 00:26:28.762 Latency(us) 00:26:28.762 [2024-12-10T21:57:36.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.762 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:28.762 nvme0n1 : 2.01 18849.39 73.63 0.00 0.00 6775.26 2706.39 11408.12 00:26:28.762 [2024-12-10T21:57:36.494Z] =================================================================================================================== 00:26:28.762 [2024-12-10T21:57:36.494Z] Total : 18849.39 73.63 0.00 0.00 6775.26 2706.39 11408.12 00:26:28.762 { 00:26:28.762 "results": [ 00:26:28.762 { 00:26:28.762 "job": "nvme0n1", 00:26:28.762 "core_mask": "0x2", 00:26:28.762 "workload": "randwrite", 00:26:28.762 "status": "finished", 00:26:28.762 "queue_depth": 128, 00:26:28.762 "io_size": 4096, 00:26:28.762 "runtime": 2.006484, 00:26:28.762 "iops": 18849.390276722865, 00:26:28.762 "mibps": 73.63043076844869, 00:26:28.762 "io_failed": 0, 00:26:28.762 "io_timeout": 0, 00:26:28.762 "avg_latency_us": 6775.257718590593, 00:26:28.762 "min_latency_us": 2706.394074074074, 00:26:28.762 "max_latency_us": 11408.118518518519 00:26:28.762 } 00:26:28.762 ], 00:26:28.762 "core_count": 1 00:26:28.762 } 00:26:28.762 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:28.762 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:28.762 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:28.762 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:28.762 | select(.opcode=="crc32c") 00:26:28.762 | "\(.module_name) \(.executed)"' 00:26:28.762 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:29.021 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:29.021 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:29.021 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:29.021 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:29.021 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 170954 00:26:29.021 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 170954 ']' 00:26:29.021 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 170954 00:26:29.021 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:29.021 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.021 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 170954 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 170954' 00:26:29.279 killing process with pid 170954 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 170954 00:26:29.279 Received shutdown signal, test time was about 2.000000 seconds 00:26:29.279 
00:26:29.279 Latency(us) 00:26:29.279 [2024-12-10T21:57:37.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.279 [2024-12-10T21:57:37.011Z] =================================================================================================================== 00:26:29.279 [2024-12-10T21:57:37.011Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 170954 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=171363 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 171363 /var/tmp/bperf.sock 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 171363 ']' 00:26:29.279 22:57:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:29.279 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.280 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:29.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:29.280 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.280 22:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:29.538 [2024-12-10 22:57:37.032140] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:26:29.538 [2024-12-10 22:57:37.032222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171363 ] 00:26:29.538 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:29.538 Zero copy mechanism will not be used. 
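The IOPS and average-latency columns in the result tables are tied together by the queue depth: with the device kept fully loaded, IOPS × average latency ≈ outstanding IOs (Little's law). Checking the 128 KiB `-q 16` randread result above:

```python
# Little's law sanity check on the qd=16 randread result reported above:
# in-flight IOs ≈ IOPS * average latency.
iops = 5148.597488898118               # "iops" from that result block
avg_latency_s = 3103.1926583787586e-6  # "avg_latency_us" converted to seconds
in_flight = iops * avg_latency_s
print(in_flight)  # close to the configured queue depth of 16
```

The 4 KiB `-q 128` run checks out the same way (19545.28 × 6540.68 µs ≈ 128).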
00:26:29.538 [2024-12-10 22:57:37.097245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.538 [2024-12-10 22:57:37.151275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.538 22:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.538 22:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:29.538 22:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:29.538 22:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:29.538 22:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:30.108 22:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:30.108 22:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:30.677 nvme0n1 00:26:30.677 22:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:30.677 22:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:30.677 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:30.677 Zero copy mechanism will not be used. 00:26:30.677 Running I/O for 2 seconds... 
00:26:32.629 6058.00 IOPS, 757.25 MiB/s [2024-12-10T21:57:40.361Z] 6508.50 IOPS, 813.56 MiB/s 00:26:32.629 Latency(us) 00:26:32.629 [2024-12-10T21:57:40.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.629 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:32.629 nvme0n1 : 2.00 6507.44 813.43 0.00 0.00 2451.76 1462.42 8009.96 00:26:32.629 [2024-12-10T21:57:40.361Z] =================================================================================================================== 00:26:32.629 [2024-12-10T21:57:40.361Z] Total : 6507.44 813.43 0.00 0.00 2451.76 1462.42 8009.96 00:26:32.629 { 00:26:32.629 "results": [ 00:26:32.629 { 00:26:32.629 "job": "nvme0n1", 00:26:32.629 "core_mask": "0x2", 00:26:32.629 "workload": "randwrite", 00:26:32.629 "status": "finished", 00:26:32.629 "queue_depth": 16, 00:26:32.629 "io_size": 131072, 00:26:32.629 "runtime": 2.003554, 00:26:32.629 "iops": 6507.436285720275, 00:26:32.629 "mibps": 813.4295357150344, 00:26:32.629 "io_failed": 0, 00:26:32.629 "io_timeout": 0, 00:26:32.629 "avg_latency_us": 2451.7636310954304, 00:26:32.629 "min_latency_us": 1462.4237037037037, 00:26:32.629 "max_latency_us": 8009.955555555555 00:26:32.629 } 00:26:32.629 ], 00:26:32.629 "core_count": 1 00:26:32.629 } 00:26:32.629 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:32.629 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:32.629 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:32.629 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:32.629 | select(.opcode=="crc32c") 00:26:32.629 | "\(.module_name) \(.executed)"' 00:26:32.629 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:32.888 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:32.888 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:32.888 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:32.888 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:32.888 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 171363 00:26:32.888 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 171363 ']' 00:26:32.888 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 171363 00:26:32.888 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:32.888 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.888 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171363 00:26:33.147 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:33.147 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:33.147 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 171363' 00:26:33.147 killing process with pid 171363 00:26:33.147 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 171363 00:26:33.147 Received shutdown signal, test time was about 2.000000 seconds 00:26:33.147 
00:26:33.147 Latency(us) 00:26:33.147 [2024-12-10T21:57:40.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.147 [2024-12-10T21:57:40.879Z] =================================================================================================================== 00:26:33.147 [2024-12-10T21:57:40.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:33.147 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 171363 00:26:33.147 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 169992 00:26:33.147 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 169992 ']' 00:26:33.147 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 169992 00:26:33.147 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:33.147 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:33.147 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 169992 00:26:33.406 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:33.406 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:33.406 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 169992' 00:26:33.406 killing process with pid 169992 00:26:33.406 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 169992 00:26:33.406 22:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 169992 00:26:33.406 00:26:33.406 real 0m15.858s 
00:26:33.406 user 0m31.514s 00:26:33.406 sys 0m4.467s 00:26:33.406 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.406 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:33.406 ************************************ 00:26:33.406 END TEST nvmf_digest_clean 00:26:33.406 ************************************ 00:26:33.406 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:33.406 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:33.406 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.406 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:33.666 ************************************ 00:26:33.666 START TEST nvmf_digest_error 00:26:33.666 ************************************ 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=171916 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:33.666 22:57:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 171916 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 171916 ']' 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:33.666 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.666 [2024-12-10 22:57:41.216534] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:26:33.666 [2024-12-10 22:57:41.216655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.666 [2024-12-10 22:57:41.290700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.666 [2024-12-10 22:57:41.348712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.666 [2024-12-10 22:57:41.348772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:33.666 [2024-12-10 22:57:41.348786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.666 [2024-12-10 22:57:41.348796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.666 [2024-12-10 22:57:41.348807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.666 [2024-12-10 22:57:41.349374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.925 [2024-12-10 22:57:41.466099] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.925 22:57:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.925 null0 00:26:33.925 [2024-12-10 22:57:41.585960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.925 [2024-12-10 22:57:41.610238] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=171947 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 171947 /var/tmp/bperf.sock 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 171947 ']' 
00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:33.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:33.925 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.184 [2024-12-10 22:57:41.658331] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:26:34.184 [2024-12-10 22:57:41.658394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171947 ] 00:26:34.184 [2024-12-10 22:57:41.723964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.184 [2024-12-10 22:57:41.781533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.443 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:34.443 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:34.443 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:34.443 22:57:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:34.702 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:34.702 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.702 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.702 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.702 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.702 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.961 nvme0n1 00:26:34.961 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:34.961 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.961 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.961 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.961 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:34.961 22:57:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:35.221 Running I/O for 2 seconds... 00:26:35.221 [2024-12-10 22:57:42.808397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.221 [2024-12-10 22:57:42.808445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.222 [2024-12-10 22:57:42.808466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.222 [2024-12-10 22:57:42.822307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.222 [2024-12-10 22:57:42.822340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.222 [2024-12-10 22:57:42.822358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.222 [2024-12-10 22:57:42.837378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.222 [2024-12-10 22:57:42.837426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.222 [2024-12-10 22:57:42.837443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.222 [2024-12-10 22:57:42.849355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.222 [2024-12-10 22:57:42.849388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11448 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.222 [2024-12-10 22:57:42.849406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.222 [2024-12-10 22:57:42.864345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.222 [2024-12-10 22:57:42.864376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.222 [2024-12-10 22:57:42.864405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.222 [2024-12-10 22:57:42.880897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.222 [2024-12-10 22:57:42.880927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.222 [2024-12-10 22:57:42.880942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.222 [2024-12-10 22:57:42.896215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.222 [2024-12-10 22:57:42.896259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.222 [2024-12-10 22:57:42.896275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.222 [2024-12-10 22:57:42.910777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.222 [2024-12-10 22:57:42.910826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.222 [2024-12-10 22:57:42.910843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.222 [2024-12-10 22:57:42.921953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.222 [2024-12-10 22:57:42.921982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.222 [2024-12-10 22:57:42.921997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.222 [2024-12-10 22:57:42.935731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.222 [2024-12-10 22:57:42.935762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.222 [2024-12-10 22:57:42.935779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.222 [2024-12-10 22:57:42.949287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.222 [2024-12-10 22:57:42.949319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.222 [2024-12-10 22:57:42.949335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:42.961572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:42.961606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:42.961624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:42.979137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:42.979167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:42.979185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:42.992649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:42.992683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:42.992700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.004090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.004123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.004141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.018141] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.018187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.018204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.030479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.030511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.030528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.041772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.041803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.041821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.055252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.055284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.055300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.068161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.068194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.068211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.082322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.082355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.082373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.097507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.097536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.097565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.111256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.111287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.111305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.122258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.122288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.122304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.136778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.136824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.136842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.151688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.151720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.151753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.162796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.162825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 
22:57:43.162841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.176509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.176559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.176577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.191263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.191292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.191308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.484 [2024-12-10 22:57:43.207603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.484 [2024-12-10 22:57:43.207633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.484 [2024-12-10 22:57:43.207649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.745 [2024-12-10 22:57:43.223361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.745 [2024-12-10 22:57:43.223397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21583 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.223413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.237031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.237061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.237076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.251012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.251044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.251061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.263180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.263208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.263224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.275587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.275616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.275632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.289180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.289209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.289224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.302284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.302313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.302328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.316667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.316699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.316716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.331464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.331495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.331512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.348004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.348036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.348054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.359920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.359949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.359966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.373050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.373079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.373094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.389411] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.389440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.389457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.401974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.402002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.402017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.414597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.414626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.414641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.430232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.430263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.430281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.447745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.447777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.447794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.746 [2024-12-10 22:57:43.462364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:35.746 [2024-12-10 22:57:43.462396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.746 [2024-12-10 22:57:43.462419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.477785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.477818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.477837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.489389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.489421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.489438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.505838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.505881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.505896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.521967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.521996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.522012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.535869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.535901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.535919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.552411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.552441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 
22:57:43.552457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.562796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.562843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.562860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.578418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.578447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.578463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.591805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.591857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.591876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.604248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.604278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23184 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.604293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.620353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.620381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.620397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.633560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.633592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.633610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.649444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.649475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.006 [2024-12-10 22:57:43.649492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.006 [2024-12-10 22:57:43.661215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.006 [2024-12-10 22:57:43.661245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.007 [2024-12-10 22:57:43.661261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.007 [2024-12-10 22:57:43.675650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.007 [2024-12-10 22:57:43.675680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.007 [2024-12-10 22:57:43.675696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.007 [2024-12-10 22:57:43.688462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.007 [2024-12-10 22:57:43.688508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.007 [2024-12-10 22:57:43.688526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.007 [2024-12-10 22:57:43.702971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.007 [2024-12-10 22:57:43.703002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.007 [2024-12-10 22:57:43.703039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.007 [2024-12-10 22:57:43.714120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1768390) 00:26:36.007 [2024-12-10 22:57:43.714151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.007 [2024-12-10 22:57:43.714168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.007 [2024-12-10 22:57:43.730329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.007 [2024-12-10 22:57:43.730361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.007 [2024-12-10 22:57:43.730400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.745652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.745682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.745708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.760532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.760588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.760612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.770758] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.770787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.770806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.786755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.786785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.786804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 18155.00 IOPS, 70.92 MiB/s [2024-12-10T21:57:43.998Z] [2024-12-10 22:57:43.801650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.801697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.801716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.816103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.816134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.816152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.828659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.828696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.828714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.838821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.838864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.838880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.852996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.853034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.853053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.868760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.868792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.868810] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.882605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.882636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.882654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.893902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.893930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.893948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.907619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.907649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.907667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.920391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.920422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17220 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.920440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.266 [2024-12-10 22:57:43.933322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.266 [2024-12-10 22:57:43.933360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.266 [2024-12-10 22:57:43.933378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.267 [2024-12-10 22:57:43.944113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.267 [2024-12-10 22:57:43.944142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.267 [2024-12-10 22:57:43.944160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.267 [2024-12-10 22:57:43.960209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.267 [2024-12-10 22:57:43.960238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.267 [2024-12-10 22:57:43.960262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.267 [2024-12-10 22:57:43.975255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.267 [2024-12-10 22:57:43.975284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:98 nsid:1 lba:24000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.267 [2024-12-10 22:57:43.975301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.267 [2024-12-10 22:57:43.990956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.267 [2024-12-10 22:57:43.990999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.267 [2024-12-10 22:57:43.991016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.525 [2024-12-10 22:57:44.006835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.525 [2024-12-10 22:57:44.006878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.525 [2024-12-10 22:57:44.006901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.525 [2024-12-10 22:57:44.021936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.021967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.021990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.037208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 
22:57:44.037239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.037262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.050910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.050940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.050958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.061510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.061561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.061587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.076701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.076732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.076753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.092154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.092185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.092203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.105579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.105611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.105630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.117448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.117477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.117495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.133517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.133579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.133596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.148234] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.148268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.148300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.163598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.163628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.163645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.176783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.176816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.176834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.188753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.188786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.188804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.202426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.202455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.202476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.216564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.216609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.216627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.233760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.233793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.233811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.526 [2024-12-10 22:57:44.247578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.526 [2024-12-10 22:57:44.247609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.526 [2024-12-10 22:57:44.247625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.261660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.261708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.261727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.275055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.275101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.275125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.289741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.289777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.289799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.304154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.304186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 
22:57:44.304220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.315740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.315772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.315789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.330463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.330493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.330526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.345640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.345672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.345688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.357042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.357072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12698 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.357092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.370713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.370746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.370764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.386315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.386344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.386363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.402327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.402357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.402376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.416868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.416911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.416930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.428142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.428175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.428192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.443108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.443139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.443156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.457113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.787 [2024-12-10 22:57:44.457158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.787 [2024-12-10 22:57:44.457182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.787 [2024-12-10 22:57:44.469781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1768390) 00:26:36.788 [2024-12-10 22:57:44.469813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.788 [2024-12-10 22:57:44.469831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.788 [2024-12-10 22:57:44.481785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.788 [2024-12-10 22:57:44.481831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.788 [2024-12-10 22:57:44.481849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.788 [2024-12-10 22:57:44.495936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.788 [2024-12-10 22:57:44.495967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.788 [2024-12-10 22:57:44.495985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.788 [2024-12-10 22:57:44.509748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:36.788 [2024-12-10 22:57:44.509781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.788 [2024-12-10 22:57:44.509798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.047 [2024-12-10 22:57:44.521792] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.047 [2024-12-10 22:57:44.521838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.047 [2024-12-10 22:57:44.521855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.047 [2024-12-10 22:57:44.535433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.047 [2024-12-10 22:57:44.535461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.047 [2024-12-10 22:57:44.535477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.047 [2024-12-10 22:57:44.548086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.047 [2024-12-10 22:57:44.548134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.047 [2024-12-10 22:57:44.548151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.047 [2024-12-10 22:57:44.562521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.047 [2024-12-10 22:57:44.562559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.047 [2024-12-10 22:57:44.562578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:37.047 [2024-12-10 22:57:44.578020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.047 [2024-12-10 22:57:44.578051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.047 [2024-12-10 22:57:44.578067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.047 [2024-12-10 22:57:44.592401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.047 [2024-12-10 22:57:44.592433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.047 [2024-12-10 22:57:44.592450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.047 [2024-12-10 22:57:44.604212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.047 [2024-12-10 22:57:44.604240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.047 [2024-12-10 22:57:44.604255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.047 [2024-12-10 22:57:44.618183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.047 [2024-12-10 22:57:44.618211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.047 [2024-12-10 22:57:44.618227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.047 [2024-12-10 22:57:44.634725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.047 [2024-12-10 22:57:44.634756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.047 [2024-12-10 22:57:44.634773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.048 [2024-12-10 22:57:44.647634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.048 [2024-12-10 22:57:44.647667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.048 [2024-12-10 22:57:44.647684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.048 [2024-12-10 22:57:44.661654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.048 [2024-12-10 22:57:44.661683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.048 [2024-12-10 22:57:44.661708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.048 [2024-12-10 22:57:44.676617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.048 [2024-12-10 22:57:44.676649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.048 [2024-12-10 
22:57:44.676667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.048 [2024-12-10 22:57:44.689710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.048 [2024-12-10 22:57:44.689742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.048 [2024-12-10 22:57:44.689761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.048 [2024-12-10 22:57:44.702341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.048 [2024-12-10 22:57:44.702372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.048 [2024-12-10 22:57:44.702389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.048 [2024-12-10 22:57:44.715329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.048 [2024-12-10 22:57:44.715361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.048 [2024-12-10 22:57:44.715377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.048 [2024-12-10 22:57:44.727984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.048 [2024-12-10 22:57:44.728014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21592 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.048 [2024-12-10 22:57:44.728031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.048 [2024-12-10 22:57:44.743034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.048 [2024-12-10 22:57:44.743063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.048 [2024-12-10 22:57:44.743078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.048 [2024-12-10 22:57:44.758183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.048 [2024-12-10 22:57:44.758227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.048 [2024-12-10 22:57:44.758243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.048 [2024-12-10 22:57:44.772497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.048 [2024-12-10 22:57:44.772527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.048 [2024-12-10 22:57:44.772566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.306 [2024-12-10 22:57:44.785695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.306 [2024-12-10 22:57:44.785732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-12-10 22:57:44.785750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.306 18298.50 IOPS, 71.48 MiB/s [2024-12-10T21:57:45.039Z] [2024-12-10 22:57:44.797143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1768390) 00:26:37.307 [2024-12-10 22:57:44.797171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-12-10 22:57:44.797186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.307 00:26:37.307 Latency(us) 00:26:37.307 [2024-12-10T21:57:45.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.307 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:37.307 nvme0n1 : 2.05 17954.04 70.13 0.00 0.00 6980.99 3543.80 45826.65 00:26:37.307 [2024-12-10T21:57:45.039Z] =================================================================================================================== 00:26:37.307 [2024-12-10T21:57:45.039Z] Total : 17954.04 70.13 0.00 0.00 6980.99 3543.80 45826.65 00:26:37.307 { 00:26:37.307 "results": [ 00:26:37.307 { 00:26:37.307 "job": "nvme0n1", 00:26:37.307 "core_mask": "0x2", 00:26:37.307 "workload": "randread", 00:26:37.307 "status": "finished", 00:26:37.307 "queue_depth": 128, 00:26:37.307 "io_size": 4096, 00:26:37.307 "runtime": 2.045501, 00:26:37.307 "iops": 17954.03668832232, 00:26:37.307 "mibps": 70.13295581375907, 00:26:37.307 "io_failed": 0, 00:26:37.307 "io_timeout": 0, 00:26:37.307 "avg_latency_us": 6980.993594876837, 00:26:37.307 "min_latency_us": 3543.7985185185184, 00:26:37.307 
"max_latency_us": 45826.654814814814 00:26:37.307 } 00:26:37.307 ], 00:26:37.307 "core_count": 1 00:26:37.307 } 00:26:37.307 22:57:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:37.307 22:57:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:37.307 22:57:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:37.307 22:57:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:37.307 | .driver_specific 00:26:37.307 | .nvme_error 00:26:37.307 | .status_code 00:26:37.307 | .command_transient_transport_error' 00:26:37.571 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:26:37.571 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 171947 00:26:37.571 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 171947 ']' 00:26:37.571 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 171947 00:26:37.571 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:37.571 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.571 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171947 00:26:37.571 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:37.571 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:37.571 22:57:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 171947' 00:26:37.571 killing process with pid 171947 00:26:37.571 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 171947 00:26:37.571 Received shutdown signal, test time was about 2.000000 seconds 00:26:37.571 00:26:37.571 Latency(us) 00:26:37.571 [2024-12-10T21:57:45.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.571 [2024-12-10T21:57:45.303Z] =================================================================================================================== 00:26:37.571 [2024-12-10T21:57:45.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:37.571 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 171947 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=172474 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 172474 /var/tmp/bperf.sock 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # '[' -z 172474 ']' 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:37.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.829 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:37.829 [2024-12-10 22:57:45.426144] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:26:37.829 [2024-12-10 22:57:45.426239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172474 ] 00:26:37.829 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:37.829 Zero copy mechanism will not be used. 
00:26:37.829 [2024-12-10 22:57:45.491619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.829 [2024-12-10 22:57:45.546688] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.088 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.088 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:38.088 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.088 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.346 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:38.346 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.346 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.346 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.346 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.346 22:57:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.917 nvme0n1 00:26:38.917 22:57:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:38.917 22:57:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.917 22:57:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.917 22:57:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.917 22:57:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:38.917 22:57:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:38.917 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:38.917 Zero copy mechanism will not be used. 00:26:38.917 Running I/O for 2 seconds... 00:26:38.917 [2024-12-10 22:57:46.548161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:38.917 [2024-12-10 22:57:46.548232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.917 [2024-12-10 22:57:46.548254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:38.917 [2024-12-10 22:57:46.553826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:38.917 [2024-12-10 22:57:46.553874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.917 [2024-12-10 22:57:46.553892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:38.917 
[2024-12-10 22:57:46.559613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:38.917 [2024-12-10 22:57:46.559661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.917 [2024-12-10 22:57:46.559678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:38.917 [2024-12-10 22:57:46.565301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:38.917 [2024-12-10 22:57:46.565347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.917 [2024-12-10 22:57:46.565364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:38.917 [2024-12-10 22:57:46.571106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:38.917 [2024-12-10 22:57:46.571138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.917 [2024-12-10 22:57:46.571170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:38.917 [2024-12-10 22:57:46.576876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:38.917 [2024-12-10 22:57:46.576910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.917 [2024-12-10 22:57:46.576957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:38.917 [2024-12-10 22:57:46.583325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:38.917 [2024-12-10 22:57:46.583375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.917 [2024-12-10 22:57:46.583393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:38.917 [2024-12-10 22:57:46.589468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:38.917 [2024-12-10 22:57:46.589500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.917 [2024-12-10 22:57:46.589533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:38.917 [2024-12-10 22:57:46.596013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:38.917 [2024-12-10 22:57:46.596061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.917 [2024-12-10 22:57:46.596078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:38.917 [2024-12-10 22:57:46.603262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:38.917 [2024-12-10 22:57:46.603308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.917 [2024-12-10 22:57:46.603324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:38.917 [2024-12-10 22:57:46.609915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:38.917 [2024-12-10 22:57:46.609948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.917 [2024-12-10 22:57:46.609979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:38.917 [2024-12-10 22:57:46.616693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:38.917 [2024-12-10 22:57:46.616727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.917 [2024-12-10 22:57:46.616745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:38.917 [2024-12-10 22:57:46.622692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:38.917 [2024-12-10 22:57:46.622725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.917 [2024-12-10 22:57:46.622743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:38.917 [2024-12-10 22:57:46.628606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:38.918 [2024-12-10 22:57:46.628639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.918 [2024-12-10 22:57:46.628657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:38.918 [2024-12-10 22:57:46.635992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:38.918 [2024-12-10 22:57:46.636032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.918 [2024-12-10 22:57:46.636051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:38.918 [2024-12-10 22:57:46.642999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:38.918 [2024-12-10 22:57:46.643047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.918 [2024-12-10 22:57:46.643066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.179 [2024-12-10 22:57:46.648896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.179 [2024-12-10 22:57:46.648930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.179 [2024-12-10 22:57:46.648949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.179 [2024-12-10 22:57:46.654838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.179 [2024-12-10 22:57:46.654870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.179 [2024-12-10 22:57:46.654887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.179 [2024-12-10 22:57:46.660659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.179 [2024-12-10 22:57:46.660703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.179 [2024-12-10 22:57:46.660722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.179 [2024-12-10 22:57:46.666412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.666460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.666479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.673392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.673441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.673459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.681382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.681414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.681432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.688252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.688286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.688305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.695289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.695323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.695341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.701458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.701492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.701511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.705316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.705349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.705367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.709286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.709319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.709337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.713933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.713965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.713999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.718645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.718677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.718694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.723387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.723418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.723436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.728482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.728515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.728533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.735202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.735234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.735275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.742867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.742898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.742915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.750648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.750695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.750713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.758326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.758358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.758376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.766079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.766111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.766148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.774094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.774126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.774158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.781912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.781942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.781959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.789807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.789839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.789871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.797388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.797419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.797435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.804992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.805047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.805066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.812555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.812589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.180 [2024-12-10 22:57:46.812607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.180 [2024-12-10 22:57:46.820070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.180 [2024-12-10 22:57:46.820104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.820122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.827748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.827782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.827801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.835529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.835583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.835602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.843411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.843443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.843460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.850315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.850347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.850365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.855932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.855979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.855997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.861378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.861410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.861427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.866742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.866775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.866793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.872196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.872244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.872262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.877051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.877084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.877102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.881677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.881710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.881728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.886269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.886315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.886332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.890778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.890810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.890827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.895569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.895601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.895618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.900716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.900748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.900781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.181 [2024-12-10 22:57:46.905882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.181 [2024-12-10 22:57:46.905929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.181 [2024-12-10 22:57:46.905969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.443 [2024-12-10 22:57:46.910644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.443 [2024-12-10 22:57:46.910676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.443 [2024-12-10 22:57:46.910693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.443 [2024-12-10 22:57:46.915694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.443 [2024-12-10 22:57:46.915727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.443 [2024-12-10 22:57:46.915744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.443 [2024-12-10 22:57:46.921949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.443 [2024-12-10 22:57:46.922001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.443 [2024-12-10 22:57:46.922018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.443 [2024-12-10 22:57:46.930060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.443 [2024-12-10 22:57:46.930106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.443 [2024-12-10 22:57:46.930123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.443 [2024-12-10 22:57:46.936863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.443 [2024-12-10 22:57:46.936895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.443 [2024-12-10 22:57:46.936928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.443 [2024-12-10 22:57:46.943571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.443 [2024-12-10 22:57:46.943604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.443 [2024-12-10 22:57:46.943623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.443 [2024-12-10 22:57:46.948838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.443 [2024-12-10 22:57:46.948887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.443 [2024-12-10 22:57:46.948905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.443 [2024-12-10 22:57:46.954254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.443 [2024-12-10 22:57:46.954286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.443 [2024-12-10 22:57:46.954321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.443 [2024-12-10 22:57:46.959900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.443 [2024-12-10 22:57:46.959931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:46.959949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:46.964902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:46.964935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:46.964953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:46.970065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:46.970098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:46.970116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:46.974756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:46.974788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:46.974806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:46.979383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:46.979415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:46.979447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:46.984222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:46.984268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:46.984286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:46.989978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:46.990010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:46.990043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:46.995831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:46.995879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:46.995897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.001590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.001623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.001649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.006895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.006927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.006961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.012640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.012674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.012692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.017931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.017964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.017990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.023103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.023135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.023154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.028516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.028556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.028577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.033448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.033480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.033497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.038024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.038068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.038087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.042641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.042673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.042692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.047385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.047424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.047443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.051967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.051999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.052017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.056692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.056724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.056742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.061338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.061371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.444 [2024-12-10 22:57:47.061388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:39.444 [2024-12-10 22:57:47.066795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:39.444 [2024-12-10 22:57:47.066827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.444 [2024-12-10 22:57:47.066846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.444 [2024-12-10 22:57:47.072243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.444 [2024-12-10 22:57:47.072275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.444 [2024-12-10 22:57:47.072293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.444 [2024-12-10 22:57:47.077994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.444 [2024-12-10 22:57:47.078027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.444 [2024-12-10 22:57:47.078045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.444 [2024-12-10 22:57:47.084031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.444 [2024-12-10 22:57:47.084064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.444 [2024-12-10 22:57:47.084083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.444 [2024-12-10 22:57:47.091529] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.444 [2024-12-10 22:57:47.091571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.444 [2024-12-10 22:57:47.091590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.444 [2024-12-10 22:57:47.099313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.099347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.099366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.445 [2024-12-10 22:57:47.105910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.105943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.105961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.445 [2024-12-10 22:57:47.111686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.111719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.111737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005c p:0 
m:0 dnr:0 00:26:39.445 [2024-12-10 22:57:47.116932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.116964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.116983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.445 [2024-12-10 22:57:47.123167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.123198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.123216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.445 [2024-12-10 22:57:47.130820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.130853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.130871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.445 [2024-12-10 22:57:47.136866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.136899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.136917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.445 [2024-12-10 22:57:47.143193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.143225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.143243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.445 [2024-12-10 22:57:47.149314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.149348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.149374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.445 [2024-12-10 22:57:47.155458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.155490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.155508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.445 [2024-12-10 22:57:47.161567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.161610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.161628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.445 [2024-12-10 22:57:47.167232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.445 [2024-12-10 22:57:47.167265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.445 [2024-12-10 22:57:47.167284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.706 [2024-12-10 22:57:47.173534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.706 [2024-12-10 22:57:47.173576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.706 [2024-12-10 22:57:47.173595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.706 [2024-12-10 22:57:47.179757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.706 [2024-12-10 22:57:47.179790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.706 [2024-12-10 22:57:47.179807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.706 [2024-12-10 22:57:47.185852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.706 [2024-12-10 22:57:47.185885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:39.706 [2024-12-10 22:57:47.185904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.706 [2024-12-10 22:57:47.191956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.706 [2024-12-10 22:57:47.191989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.706 [2024-12-10 22:57:47.192008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.706 [2024-12-10 22:57:47.197922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.706 [2024-12-10 22:57:47.197956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.706 [2024-12-10 22:57:47.197974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.706 [2024-12-10 22:57:47.204180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.706 [2024-12-10 22:57:47.204221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.706 [2024-12-10 22:57:47.204240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.706 [2024-12-10 22:57:47.210210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.706 [2024-12-10 22:57:47.210243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.706 [2024-12-10 22:57:47.210261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.706 [2024-12-10 22:57:47.216419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.706 [2024-12-10 22:57:47.216453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.706 [2024-12-10 22:57:47.216472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.706 [2024-12-10 22:57:47.222461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.706 [2024-12-10 22:57:47.222495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.706 [2024-12-10 22:57:47.222513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.706 [2024-12-10 22:57:47.226500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.706 [2024-12-10 22:57:47.226533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.706 [2024-12-10 22:57:47.226559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.706 [2024-12-10 22:57:47.231388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.706 [2024-12-10 22:57:47.231421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.706 [2024-12-10 22:57:47.231455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.237499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.237532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.237562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.243476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.243510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.243528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.248761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.248793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.248812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.254799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 
00:26:39.707 [2024-12-10 22:57:47.254832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.254866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.260621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.260655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.260673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.266461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.266494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.266512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.271739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.271789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.271806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.276719] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.276751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.276769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.281459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.281490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.281508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.285998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.286028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.286045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.290746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.290778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.290796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 
m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.295574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.295614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.295645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.300471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.300503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.300536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.305197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.305229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.305249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.309907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.309957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.309975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.315192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.315227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.315245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.320212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.320244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.320262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.324867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.324898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.324916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.329527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.329567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.329585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.334159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.334208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.334226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.338810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.338852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.338870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.343449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.343499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.343517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.348032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.348076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:39.707 [2024-12-10 22:57:47.348094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.352668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.352700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.352717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.707 [2024-12-10 22:57:47.357273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.707 [2024-12-10 22:57:47.357306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.707 [2024-12-10 22:57:47.357338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.361997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.362045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.362062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.366686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.366718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.366751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.372139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.372186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.372205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.379048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.379082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.379100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.386131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.386164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.386182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.391554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.391587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.391604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.397189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.397222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.397239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.402309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.402358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.402375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.408429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.408463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.408497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.413201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.413235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.413253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.417953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.418001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.418018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.422741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.422774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.422791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.428003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.428037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.428064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.708 [2024-12-10 22:57:47.433055] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.708 [2024-12-10 22:57:47.433088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.708 [2024-12-10 22:57:47.433106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.438842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.438877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.438895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.446032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.446066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.446098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.451861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.451895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.451913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c 
p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.456668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.456702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.456720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.462059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.462093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.462111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.468282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.468316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.468335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.473497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.473530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.473556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.478998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.479031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.479050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.485245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.485279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.485298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.490799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.490833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.490851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.496207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.496241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.496260] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.501321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.501355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.501373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.506078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.506110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.506128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.510792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.969 [2024-12-10 22:57:47.510823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.969 [2024-12-10 22:57:47.510841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.969 [2024-12-10 22:57:47.515605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.515636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.515655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.520326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.520359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.520386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.524953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.524985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.525003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.529532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.529572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.529591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.534288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.534321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.534344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.539932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.539966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.539984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.970 5402.00 IOPS, 675.25 MiB/s [2024-12-10T21:57:47.702Z] [2024-12-10 22:57:47.545681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.545720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.545739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.551038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.551070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.551088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.558217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.558250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.558282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.564414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.564449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.564467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.570449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.570494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.570514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.575856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.575890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.575909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.581810] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.581844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.581862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.588161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.588195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.588214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.594743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.594777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.594795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.601322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.601357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.601375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c 
p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.606688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.606722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.606742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.611863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.611897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.611915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.617101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.617134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.617152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.622004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.622037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.622055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.627617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.627650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.627668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.633094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.633137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.633155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.636282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.636315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.636333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.641179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.641212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.641230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.646626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.646659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.970 [2024-12-10 22:57:47.646677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.970 [2024-12-10 22:57:47.651432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.970 [2024-12-10 22:57:47.651464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.971 [2024-12-10 22:57:47.651481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.971 [2024-12-10 22:57:47.656684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.971 [2024-12-10 22:57:47.656716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.971 [2024-12-10 22:57:47.656735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.971 [2024-12-10 22:57:47.662334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.971 [2024-12-10 22:57:47.662378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:39.971 [2024-12-10 22:57:47.662406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.971 [2024-12-10 22:57:47.668002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.971 [2024-12-10 22:57:47.668037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.971 [2024-12-10 22:57:47.668056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.971 [2024-12-10 22:57:47.673394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.971 [2024-12-10 22:57:47.673427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.971 [2024-12-10 22:57:47.673446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.971 [2024-12-10 22:57:47.678481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.971 [2024-12-10 22:57:47.678515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.971 [2024-12-10 22:57:47.678534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.971 [2024-12-10 22:57:47.683461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.971 [2024-12-10 22:57:47.683494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.971 [2024-12-10 22:57:47.683513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.971 [2024-12-10 22:57:47.688436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.971 [2024-12-10 22:57:47.688470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.971 [2024-12-10 22:57:47.688488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.971 [2024-12-10 22:57:47.693758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:39.971 [2024-12-10 22:57:47.693790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.971 [2024-12-10 22:57:47.693809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.233 [2024-12-10 22:57:47.699837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.233 [2024-12-10 22:57:47.699873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.233 [2024-12-10 22:57:47.699891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.233 [2024-12-10 22:57:47.705586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.233 [2024-12-10 22:57:47.705632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.233 [2024-12-10 22:57:47.705651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.233 [2024-12-10 22:57:47.711591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.233 [2024-12-10 22:57:47.711641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.233 [2024-12-10 22:57:47.711660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.233 [2024-12-10 22:57:47.715895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.715929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.715947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.720473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.720507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.720525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.726224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.726271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.726288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.731931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.731978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.731996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.738074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.738107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.738125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.744115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.744149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.744168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.750323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.750371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.750389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.756703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.756736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.756754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.762598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.762631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.762649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.768794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.768828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.768847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.775243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.775276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.775294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.780298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.780332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.780351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.783652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.783685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.783703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.788807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.788841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.788859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.793944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.793976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.793993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.799027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.799060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.799078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.805111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.805144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.805173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.811375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.811412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.811432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.816961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.816994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.817027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.823178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.823213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.823235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.829979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.830013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.830032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.835930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.835964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.835982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.841996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.842031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.842049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.848185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.848220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.848239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.852922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.234 [2024-12-10 22:57:47.852956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.234 [2024-12-10 22:57:47.852974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.234 [2024-12-10 22:57:47.858703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.858736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.858754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.865476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.865509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.865542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.871486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.871520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.871538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.876676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.876708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.876727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.881494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.881526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.881552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.886653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.886685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.886703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.891781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.891815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.891833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.896491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.896524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.896541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.901397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.901429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.901457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.907084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.907116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.907134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.912614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.912647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.912664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.917681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.917714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.917732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.923452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.923486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.923504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.929288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.929322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.929340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.935137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.935171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.935189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.940394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.940427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.940445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.944556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.944588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.944606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.949125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.949167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.949187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.953693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.953726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.953744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.235 [2024-12-10 22:57:47.958395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.235 [2024-12-10 22:57:47.958426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.235 [2024-12-10 22:57:47.958444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.495 [2024-12-10 22:57:47.963061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.495 [2024-12-10 22:57:47.963094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.495 [2024-12-10 22:57:47.963126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.495 [2024-12-10 22:57:47.967635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.495 [2024-12-10 22:57:47.967667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.495 [2024-12-10 22:57:47.967685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.495 [2024-12-10 22:57:47.972365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.495 [2024-12-10 22:57:47.972396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.495 [2024-12-10 22:57:47.972412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.495 [2024-12-10 22:57:47.977067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.495 [2024-12-10 22:57:47.977114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.495 [2024-12-10 22:57:47.977133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.495 [2024-12-10 22:57:47.981916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.495 [2024-12-10 22:57:47.981948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.495 [2024-12-10 22:57:47.981965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.495 [2024-12-10 22:57:47.986558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.495 [2024-12-10 22:57:47.986590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.495 [2024-12-10 22:57:47.986607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.495 [2024-12-10 22:57:47.991108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.495 [2024-12-10 22:57:47.991142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.495 [2024-12-10 22:57:47.991159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.495 [2024-12-10 22:57:47.995716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.495 [2024-12-10 22:57:47.995748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.495 [2024-12-10 22:57:47.995765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.495 [2024-12-10 22:57:48.000431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.495 [2024-12-10 22:57:48.000464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.495 [2024-12-10 22:57:48.000496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.495 [2024-12-10 22:57:48.005016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.495 [2024-12-10 22:57:48.005048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.495 [2024-12-10 22:57:48.005066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.495 [2024-12-10 22:57:48.009600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.495 [2024-12-10 22:57:48.009633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.009651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.014330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.014363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.014380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.019032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.019079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.019096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.023837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.023869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.023900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.028536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.028594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.028626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.033328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.033361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.033379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.037928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.037960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.037978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.043297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.043330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.043363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.048064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.048096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.048114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.052690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.052722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.052739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.057157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.057189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.057206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.062559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.062591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.062609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.069243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.069277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.069296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.077001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.077046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.077066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.084296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.084345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.084363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.091388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.091423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.091442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.098123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.098157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.098191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.104379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.104427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.104444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.110633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.110666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.110685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.116989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.117023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.117041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.123436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.123470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.123489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.129735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.129768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.129787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.135865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.135898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.135917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.141199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.141233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.141251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:40.496 [2024-12-10 22:57:48.145710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0)
00:26:40.496 [2024-12-10 22:57:48.145743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.496 [2024-12-10 22:57:48.145760] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.496 [2024-12-10 22:57:48.150191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.496 [2024-12-10 22:57:48.150223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.496 [2024-12-10 22:57:48.150240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.496 [2024-12-10 22:57:48.154725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.154756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.497 [2024-12-10 22:57:48.154774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.497 [2024-12-10 22:57:48.159468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.159514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.497 [2024-12-10 22:57:48.159531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.497 [2024-12-10 22:57:48.165031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.165078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.497 [2024-12-10 22:57:48.165095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.497 [2024-12-10 22:57:48.169803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.169835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.497 [2024-12-10 22:57:48.169852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.497 [2024-12-10 22:57:48.175216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.175250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.497 [2024-12-10 22:57:48.175279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.497 [2024-12-10 22:57:48.181126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.181160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.497 [2024-12-10 22:57:48.181197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.497 [2024-12-10 22:57:48.188749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.188783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.497 [2024-12-10 22:57:48.188802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.497 [2024-12-10 22:57:48.195999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.196033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.497 [2024-12-10 22:57:48.196051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.497 [2024-12-10 22:57:48.202337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.202370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.497 [2024-12-10 22:57:48.202402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.497 [2024-12-10 22:57:48.208739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.208772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.497 [2024-12-10 22:57:48.208790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.497 [2024-12-10 22:57:48.215283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.215315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.497 [2024-12-10 22:57:48.215332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.497 [2024-12-10 22:57:48.221712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.497 [2024-12-10 22:57:48.221745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.497 [2024-12-10 22:57:48.221764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.228095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.228129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.228147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.234107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.234151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.234169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.240481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 
00:26:40.758 [2024-12-10 22:57:48.240514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.240532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.248123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.248155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.248173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.254921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.254954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.254986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.260800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.260842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.260860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.266565] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.266606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.266624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.272349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.272381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.272398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.278244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.278292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.278310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.283474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.283508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.283526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 
m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.289103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.289136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.289154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.294799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.294832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.294851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.299789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.299822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.299840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.305365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.305399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.305417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.311487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.311522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.311540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.317578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.317611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.317629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.322337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.322370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.322388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.328006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.328046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.328070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.334373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.334407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.334437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.339749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.339782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.339801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.345114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.345148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.758 [2024-12-10 22:57:48.345167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.351668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.351701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.758 [2024-12-10 22:57:48.351727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.758 [2024-12-10 22:57:48.359515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.758 [2024-12-10 22:57:48.359572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.359606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.365977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.366011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.366030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.374153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.374186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.374203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.378806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.378838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.378857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.383516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.383560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.383582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.389423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.389464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.389497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.395652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.395685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.395703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.402650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.402684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.402702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.409182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.409215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.409233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.415788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.415835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.415855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.421892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.421926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.421944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.427596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 
00:26:40.759 [2024-12-10 22:57:48.427629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.427647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.435117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.435149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.435181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.442770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.442804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.442823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.450633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.450667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.450685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.456471] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.456509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.456529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.460066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.460097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.460114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.467421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.467452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.467470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.474098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.474129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.474147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:005c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.480117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.480148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.480166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.759 [2024-12-10 22:57:48.486469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:40.759 [2024-12-10 22:57:48.486500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.759 [2024-12-10 22:57:48.486517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.493382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.493431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.493449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.498869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.498917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.498944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.504450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.504489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.504512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.508727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.508760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.508779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.512764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.512796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.512814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.515712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.515743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 
22:57:48.515761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.520005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.520038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.520056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.523850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.523882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.523900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.526881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.526912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.526929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.531240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.531272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.531290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.536000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.536031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.536048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:41.019 [2024-12-10 22:57:48.540756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.540787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.540804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:41.019 5478.00 IOPS, 684.75 MiB/s [2024-12-10T21:57:48.751Z] [2024-12-10 22:57:48.547264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa1b8a0) 00:26:41.019 [2024-12-10 22:57:48.547310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.019 [2024-12-10 22:57:48.547327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:41.019 00:26:41.019 Latency(us) 00:26:41.019 [2024-12-10T21:57:48.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.019 Job: nvme0n1 (Core Mask 0x2, 
workload: randread, depth: 16, IO size: 131072) 00:26:41.019 nvme0n1 : 2.00 5476.75 684.59 0.00 0.00 2916.72 728.18 8446.86 00:26:41.019 [2024-12-10T21:57:48.751Z] =================================================================================================================== 00:26:41.019 [2024-12-10T21:57:48.751Z] Total : 5476.75 684.59 0.00 0.00 2916.72 728.18 8446.86 00:26:41.019 { 00:26:41.019 "results": [ 00:26:41.019 { 00:26:41.019 "job": "nvme0n1", 00:26:41.019 "core_mask": "0x2", 00:26:41.019 "workload": "randread", 00:26:41.020 "status": "finished", 00:26:41.020 "queue_depth": 16, 00:26:41.020 "io_size": 131072, 00:26:41.020 "runtime": 2.003378, 00:26:41.020 "iops": 5476.749769639079, 00:26:41.020 "mibps": 684.5937212048849, 00:26:41.020 "io_failed": 0, 00:26:41.020 "io_timeout": 0, 00:26:41.020 "avg_latency_us": 2916.719534437828, 00:26:41.020 "min_latency_us": 728.1777777777778, 00:26:41.020 "max_latency_us": 8446.862222222222 00:26:41.020 } 00:26:41.020 ], 00:26:41.020 "core_count": 1 00:26:41.020 } 00:26:41.020 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:41.020 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:41.020 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:41.020 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:41.020 | .driver_specific 00:26:41.020 | .nvme_error 00:26:41.020 | .status_code 00:26:41.020 | .command_transient_transport_error' 00:26:41.279 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 354 > 0 )) 00:26:41.279 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 172474 00:26:41.279 
22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 172474 ']' 00:26:41.279 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 172474 00:26:41.280 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:41.280 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.280 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 172474 00:26:41.280 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:41.280 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:41.280 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 172474' 00:26:41.280 killing process with pid 172474 00:26:41.280 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 172474 00:26:41.280 Received shutdown signal, test time was about 2.000000 seconds 00:26:41.280 00:26:41.280 Latency(us) 00:26:41.280 [2024-12-10T21:57:49.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.280 [2024-12-10T21:57:49.012Z] =================================================================================================================== 00:26:41.280 [2024-12-10T21:57:49.012Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.280 22:57:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 172474 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=172878 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 172878 /var/tmp/bperf.sock 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 172878 ']' 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:41.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.538 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:41.538 [2024-12-10 22:57:49.172278] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:26:41.538 [2024-12-10 22:57:49.172376] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172878 ] 00:26:41.538 [2024-12-10 22:57:49.237702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.796 [2024-12-10 22:57:49.291621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.796 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.796 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:41.796 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:41.796 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.055 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:42.055 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.055 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.055 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.055 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.055 22:57:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.623 nvme0n1 00:26:42.623 22:57:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:42.623 22:57:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.623 22:57:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.623 22:57:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.623 22:57:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:42.623 22:57:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:42.623 Running I/O for 2 seconds... 
00:26:42.623 [2024-12-10 22:57:50.335191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef57b0 00:26:42.623 [2024-12-10 22:57:50.336386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.623 [2024-12-10 22:57:50.336427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:42.623 [2024-12-10 22:57:50.347769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee88f8 00:26:42.623 [2024-12-10 22:57:50.348480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.623 [2024-12-10 22:57:50.348527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:42.883 [2024-12-10 22:57:50.363284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efda78 00:26:42.883 [2024-12-10 22:57:50.365300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.883 [2024-12-10 22:57:50.365346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:42.883 [2024-12-10 22:57:50.371892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee8088 00:26:42.883 [2024-12-10 22:57:50.372776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.883 [2024-12-10 22:57:50.372806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:42.883 [2024-12-10 22:57:50.383433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efd640 00:26:42.883 [2024-12-10 22:57:50.384322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.883 [2024-12-10 22:57:50.384367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:42.883 [2024-12-10 22:57:50.398345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee0630 00:26:42.883 [2024-12-10 22:57:50.399661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.883 [2024-12-10 22:57:50.399692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:42.883 [2024-12-10 22:57:50.409515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016eeee38 00:26:42.883 [2024-12-10 22:57:50.410823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.883 [2024-12-10 22:57:50.410868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:42.883 [2024-12-10 22:57:50.422130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efbcf0 00:26:42.883 [2024-12-10 22:57:50.423528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.883 [2024-12-10 22:57:50.423581] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.431672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016edece0 00:26:42.884 [2024-12-10 22:57:50.432419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.432462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.444768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee3d08 00:26:42.884 [2024-12-10 22:57:50.445432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.445461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.459157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee6738 00:26:42.884 [2024-12-10 22:57:50.460886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.460917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.467747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016edf988 00:26:42.884 [2024-12-10 22:57:50.468532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.468584] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.480365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efa3a0 00:26:42.884 [2024-12-10 22:57:50.481424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.481455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.495010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef35f0 00:26:42.884 [2024-12-10 22:57:50.496618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.496653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.507225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee1710 00:26:42.884 [2024-12-10 22:57:50.508913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.508956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.515588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef7da8 00:26:42.884 [2024-12-10 22:57:50.516336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9635 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:42.884 [2024-12-10 22:57:50.516379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.530246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efac10 00:26:42.884 [2024-12-10 22:57:50.531646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.531691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.542127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee8d30 00:26:42.884 [2024-12-10 22:57:50.543145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.543175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.553596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016eebb98 00:26:42.884 [2024-12-10 22:57:50.554939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.554983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.565490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee5220 00:26:42.884 [2024-12-10 22:57:50.566724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 
nsid:1 lba:13984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.566753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.577654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016eeaef0 00:26:42.884 [2024-12-10 22:57:50.578962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.579006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.589518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef9f68 00:26:42.884 [2024-12-10 22:57:50.590715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.590754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.601159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016eecc78 00:26:42.884 [2024-12-10 22:57:50.602429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.884 [2024-12-10 22:57:50.602475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:42.884 [2024-12-10 22:57:50.612456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee5658 00:26:43.145 [2024-12-10 22:57:50.613782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.145 [2024-12-10 22:57:50.613812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:43.145 [2024-12-10 22:57:50.624254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee1710 00:26:43.145 [2024-12-10 22:57:50.625448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.145 [2024-12-10 22:57:50.625491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.145 [2024-12-10 22:57:50.638582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efa3a0 00:26:43.145 [2024-12-10 22:57:50.640363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.145 [2024-12-10 22:57:50.640408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.145 [2024-12-10 22:57:50.647047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef0ff8 00:26:43.145 [2024-12-10 22:57:50.647953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.145 [2024-12-10 22:57:50.647996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:43.145 [2024-12-10 22:57:50.661330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee88f8 
00:26:43.145 [2024-12-10 22:57:50.662942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.662971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.670635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee4de8
00:26:43.145 [2024-12-10 22:57:50.671487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.671530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.685370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efeb58
00:26:43.145 [2024-12-10 22:57:50.686855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.686885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.696458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee0630
00:26:43.145 [2024-12-10 22:57:50.697963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.698007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.708844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016eea680
00:26:43.145 [2024-12-10 22:57:50.710397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.710440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.719984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016eeaef0
00:26:43.145 [2024-12-10 22:57:50.721451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.721481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.731588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef0bc0
00:26:43.145 [2024-12-10 22:57:50.732880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.732924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.743392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef1868
00:26:43.145 [2024-12-10 22:57:50.744837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.744868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.755111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef4298
00:26:43.145 [2024-12-10 22:57:50.756290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.756335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.766655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef3e60
00:26:43.145 [2024-12-10 22:57:50.767957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.768000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.777636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef2510
00:26:43.145 [2024-12-10 22:57:50.778806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.778851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.789413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef3e60
00:26:43.145 [2024-12-10 22:57:50.790413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.790457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.801648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016edf118
00:26:43.145 [2024-12-10 22:57:50.802795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.802830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.813394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee12d8
00:26:43.145 [2024-12-10 22:57:50.814222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.814266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.824405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efb480
00:26:43.145 [2024-12-10 22:57:50.825104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.825149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.837231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016eedd58
00:26:43.145 [2024-12-10 22:57:50.838511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.838561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.847959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef5be8
00:26:43.145 [2024-12-10 22:57:50.849115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.849145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.859955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee3d08
00:26:43.145 [2024-12-10 22:57:50.861087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.861133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:43.145 [2024-12-10 22:57:50.872091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efd640
00:26:43.145 [2024-12-10 22:57:50.872794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.145 [2024-12-10 22:57:50.872825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:43.407 [2024-12-10 22:57:50.885816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef2510
00:26:43.407 [2024-12-10 22:57:50.887280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.407 [2024-12-10 22:57:50.887325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:50.896975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee6b70
00:26:43.408 [2024-12-10 22:57:50.898341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:50.898385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:50.908776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef0bc0
00:26:43.408 [2024-12-10 22:57:50.910116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:50.910165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:50.919438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee6738
00:26:43.408 [2024-12-10 22:57:50.920725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:50.920755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:50.931348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee49b0
00:26:43.408 [2024-12-10 22:57:50.932352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:50.932395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:50.943024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efbcf0
00:26:43.408 [2024-12-10 22:57:50.944115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:50.944159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:50.955044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef6458
00:26:43.408 [2024-12-10 22:57:50.955710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:50.955740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:50.966937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efe720
00:26:43.408 [2024-12-10 22:57:50.967986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:50.968029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:50.978809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ef1868
00:26:43.408 [2024-12-10 22:57:50.979532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:50.979589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:50.993742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016efeb58
00:26:43.408 [2024-12-10 22:57:50.995677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:50.995722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:51.002267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee95a0
00:26:43.408 [2024-12-10 22:57:51.003268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:51.003312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:51.016571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.408 [2024-12-10 22:57:51.016760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:51.016790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:51.030604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.408 [2024-12-10 22:57:51.030830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:51.030886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:51.044682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.408 [2024-12-10 22:57:51.044940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:51.044982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:51.058795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.408 [2024-12-10 22:57:51.059066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:51.059110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:51.072828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.408 [2024-12-10 22:57:51.072998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:51.073024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:51.086914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.408 [2024-12-10 22:57:51.087181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:51.087225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:51.101001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.408 [2024-12-10 22:57:51.101217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:51.101259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:51.114736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.408 [2024-12-10 22:57:51.114948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:51.114991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.408 [2024-12-10 22:57:51.128711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.408 [2024-12-10 22:57:51.128901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.408 [2024-12-10 22:57:51.128943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.670 [2024-12-10 22:57:51.142924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.670 [2024-12-10 22:57:51.143144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.670 [2024-12-10 22:57:51.143188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.670 [2024-12-10 22:57:51.156793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.670 [2024-12-10 22:57:51.156968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.670 [2024-12-10 22:57:51.156995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.670 [2024-12-10 22:57:51.170834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.670 [2024-12-10 22:57:51.171047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.171090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.184811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.184996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.185037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.198499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.198672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.198700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.212015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.212182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.212210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.225569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.225733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.225760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.239043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.239213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.239239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.252741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.252992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.253027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.266193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.266361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.266387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.279717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.279915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.279958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.293181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.293349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.293376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.306644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.306832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.306873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.320210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.320395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.320422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 20445.00 IOPS, 79.86 MiB/s [2024-12-10T21:57:51.403Z] [2024-12-10 22:57:51.333810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.333997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.334024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.347311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.347485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.347512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.360915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.361090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.361119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.374205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.374376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.374406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.671 [2024-12-10 22:57:51.387657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.671 [2024-12-10 22:57:51.387912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.671 [2024-12-10 22:57:51.387943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.401454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.401629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.401657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.415100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.415364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.415395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.428665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.428814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.428855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.442290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.442494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.442537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.455929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.456099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.456126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.469430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.469625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.469653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.482903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.483071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.483099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.496278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.496447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.496474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.509770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.510029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.510058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.523198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.523367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.523393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.536708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.536954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.536999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.550271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.550441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.550467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.563757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.563977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.564003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.577310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.577586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.577617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.590654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.590802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.590830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.604080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.604249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.604283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.617627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.617852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.617879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.630968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.631136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.631168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.644484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.644728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.644759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.931 [2024-12-10 22:57:51.658154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8
00:26:43.931 [2024-12-10 22:57:51.658325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.931 [2024-12-10 22:57:51.658354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:44.193 [2024-12-10 22:57:51.671898] tcp.c:2241:data_crc32_calc_done: *ERROR*:
Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.672158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.672185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.685499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.685690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.685719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.698989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.699158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.699187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.712638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.712879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.712923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 
22:57:51.726105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.726282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.726310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.739753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.739930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.739960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.753301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.753489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.753517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.766997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.767168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.767195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.780457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.780695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.780725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.793919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.794091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.794118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.807437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.807632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.807661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.820860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.821044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.821071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.834284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.834567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.834598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.847815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.848082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.848110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.861361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.861596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.861623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.874863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.875034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.193 [2024-12-10 22:57:51.875060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.193 [2024-12-10 22:57:51.887959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.193 [2024-12-10 22:57:51.888127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.194 [2024-12-10 22:57:51.888156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.194 [2024-12-10 22:57:51.901354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.194 [2024-12-10 22:57:51.901524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.194 [2024-12-10 22:57:51.901573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.194 [2024-12-10 22:57:51.915148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.194 [2024-12-10 22:57:51.915428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.194 [2024-12-10 22:57:51.915458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:51.928896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:51.929068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 
[2024-12-10 22:57:51.929095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:51.942429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:51.942644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 [2024-12-10 22:57:51.942672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:51.956005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:51.956194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 [2024-12-10 22:57:51.956232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:51.969636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:51.969978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 [2024-12-10 22:57:51.970008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:51.983415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:51.983613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19862 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 [2024-12-10 22:57:51.983642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:51.996895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:51.997136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 [2024-12-10 22:57:51.997164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:52.010587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:52.010764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 [2024-12-10 22:57:52.010792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:52.024093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:52.024262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 [2024-12-10 22:57:52.024288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:52.037701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:52.037929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:52 nsid:1 lba:17433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 [2024-12-10 22:57:52.037956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:52.051247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:52.051416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 [2024-12-10 22:57:52.051442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:52.064739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:52.064963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 [2024-12-10 22:57:52.064990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:52.078215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:52.078398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.453 [2024-12-10 22:57:52.078426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.453 [2024-12-10 22:57:52.091602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.453 [2024-12-10 22:57:52.091776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-12-10 22:57:52.091806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.454 [2024-12-10 22:57:52.105066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.454 [2024-12-10 22:57:52.105237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-12-10 22:57:52.105264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.454 [2024-12-10 22:57:52.118728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.454 [2024-12-10 22:57:52.118886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-12-10 22:57:52.118928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.454 [2024-12-10 22:57:52.132229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.454 [2024-12-10 22:57:52.132397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-12-10 22:57:52.132439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.454 [2024-12-10 22:57:52.145445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 
00:26:44.454 [2024-12-10 22:57:52.145641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-12-10 22:57:52.145673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.454 [2024-12-10 22:57:52.159034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.454 [2024-12-10 22:57:52.159204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-12-10 22:57:52.159233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.454 [2024-12-10 22:57:52.172512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.454 [2024-12-10 22:57:52.172701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-12-10 22:57:52.172731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.712 [2024-12-10 22:57:52.186180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.712 [2024-12-10 22:57:52.186323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.712 [2024-12-10 22:57:52.186352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.712 [2024-12-10 22:57:52.199758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.712 [2024-12-10 22:57:52.200004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.712 [2024-12-10 22:57:52.200032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.712 [2024-12-10 22:57:52.213428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.712 [2024-12-10 22:57:52.213681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.712 [2024-12-10 22:57:52.213712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.712 [2024-12-10 22:57:52.227514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.712 [2024-12-10 22:57:52.227741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.712 [2024-12-10 22:57:52.227785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.712 [2024-12-10 22:57:52.241509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.712 [2024-12-10 22:57:52.241772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.712 [2024-12-10 22:57:52.241803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.712 [2024-12-10 22:57:52.255888] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.712 [2024-12-10 22:57:52.256065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.712 [2024-12-10 22:57:52.256092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.712 [2024-12-10 22:57:52.269828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.712 [2024-12-10 22:57:52.270102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.712 [2024-12-10 22:57:52.270131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.712 [2024-12-10 22:57:52.283924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.713 [2024-12-10 22:57:52.284201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.713 [2024-12-10 22:57:52.284246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.713 [2024-12-10 22:57:52.297816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.713 [2024-12-10 22:57:52.298034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.713 [2024-12-10 22:57:52.298061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:26:44.713 [2024-12-10 22:57:52.311828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.713 [2024-12-10 22:57:52.312102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.713 [2024-12-10 22:57:52.312140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.713 [2024-12-10 22:57:52.325797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1208e40) with pdu=0x200016ee99d8 00:26:44.713 [2024-12-10 22:57:52.327417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.713 [2024-12-10 22:57:52.327446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.713 19626.00 IOPS, 76.66 MiB/s 00:26:44.713 Latency(us) 00:26:44.713 [2024-12-10T21:57:52.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.713 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:44.713 nvme0n1 : 2.01 19625.63 76.66 0.00 0.00 6508.03 2779.21 15437.37 00:26:44.713 [2024-12-10T21:57:52.445Z] =================================================================================================================== 00:26:44.713 [2024-12-10T21:57:52.445Z] Total : 19625.63 76.66 0.00 0.00 6508.03 2779.21 15437.37 00:26:44.713 { 00:26:44.713 "results": [ 00:26:44.713 { 00:26:44.713 "job": "nvme0n1", 00:26:44.713 "core_mask": "0x2", 00:26:44.713 "workload": "randwrite", 00:26:44.713 "status": "finished", 00:26:44.713 "queue_depth": 128, 00:26:44.713 "io_size": 4096, 00:26:44.713 "runtime": 2.00656, 00:26:44.713 "iops": 19625.627940355633, 00:26:44.713 "mibps": 76.66260914201419, 
00:26:44.713 "io_failed": 0, 00:26:44.713 "io_timeout": 0, 00:26:44.713 "avg_latency_us": 6508.031720708012, 00:26:44.713 "min_latency_us": 2779.211851851852, 00:26:44.713 "max_latency_us": 15437.368888888888 00:26:44.713 } 00:26:44.713 ], 00:26:44.713 "core_count": 1 00:26:44.713 } 00:26:44.713 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:44.713 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:44.713 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:44.713 | .driver_specific 00:26:44.713 | .nvme_error 00:26:44.713 | .status_code 00:26:44.713 | .command_transient_transport_error' 00:26:44.713 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 154 > 0 )) 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 172878 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 172878 ']' 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 172878 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 172878 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 172878' 00:26:44.973 killing process with pid 172878 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 172878 00:26:44.973 Received shutdown signal, test time was about 2.000000 seconds 00:26:44.973 00:26:44.973 Latency(us) 00:26:44.973 [2024-12-10T21:57:52.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.973 [2024-12-10T21:57:52.705Z] =================================================================================================================== 00:26:44.973 [2024-12-10T21:57:52.705Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:44.973 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 172878 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=173292 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:45.232 22:57:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 173292 /var/tmp/bperf.sock 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 173292 ']' 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:45.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.232 22:57:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.232 [2024-12-10 22:57:52.929985] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:26:45.232 [2024-12-10 22:57:52.930076] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173292 ] 00:26:45.232 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:45.232 Zero copy mechanism will not be used. 
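The `get_transient_errcount` trace earlier in this log turns `bdev_get_iostat` output into the pass/fail number checked by `(( 154 > 0 ))`. A minimal standalone sketch of that extraction (the JSON snippet and file path below are hypothetical, shaped after the exact jq path `host/digest.sh@28` applies; the count 154 just mirrors the value seen above):

```shell
# Hypothetical bdev_get_iostat output; the shape follows the jq path used
# by host/digest.sh@28, the values are made up for illustration.
cat > /tmp/iostat.json <<'EOF'
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 154
          }
        }
      }
    }
  ]
}
EOF

# Same filter as the host/digest.sh@28 trace above: pull the transient
# transport error count that the injected CRC corruption should produce.
errcount=$(jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error' /tmp/iostat.json)

# host/digest.sh@71 treats the run as passing only if errors were seen.
(( errcount > 0 )) && echo "digest errors detected: $errcount"
```

The test deliberately inverts the usual expectation: with `--ddgst` enabled and CRC32C corruption injected, a count of zero would mean the digest path silently accepted bad data.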
00:26:45.491 [2024-12-10 22:57:52.997839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.491 [2024-12-10 22:57:53.059403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.491 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:45.491 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:45.491 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:45.491 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:46.058 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:46.058 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.058 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.058 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.058 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.058 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.317 nvme0n1 00:26:46.317 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:46.317 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.317 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.317 22:57:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.317 22:57:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:46.317 22:57:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:46.578 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:46.578 Zero copy mechanism will not be used. 00:26:46.578 Running I/O for 2 seconds... 00:26:46.578 [2024-12-10 22:57:54.118572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.578 [2024-12-10 22:57:54.118677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.578 [2024-12-10 22:57:54.118717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.578 [2024-12-10 22:57:54.124596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.578 [2024-12-10 22:57:54.124681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.578 [2024-12-10 22:57:54.124717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.578 
[2024-12-10 22:57:54.129909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.578 [2024-12-10 22:57:54.129985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.130016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.135515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.135601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.135632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.141239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.141327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.141360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.146780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.146874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.146907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.152304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.152378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.152408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.158024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.158101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.158133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.163916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.164001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.164031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.169010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.169087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.169119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.174182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.174265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.174295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.179132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.179218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.179247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.184205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.184285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.184314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.189330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.189402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.189431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.194427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.194496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.194531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.200066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.200138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.200167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.206990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.207181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.207224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.213277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.213381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:46.579 [2024-12-10 22:57:54.213412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.219948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.220151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.220180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.226359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.226470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.226500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.232522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.232665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.232696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.238384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.238751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.238783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.245009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.245344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.245376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.251027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.251322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.251364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.256138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.256396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.256431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.260416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.260662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.260695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.265806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.579 [2024-12-10 22:57:54.266048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.579 [2024-12-10 22:57:54.266080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.579 [2024-12-10 22:57:54.270669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.580 [2024-12-10 22:57:54.270911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.580 [2024-12-10 22:57:54.270942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.580 [2024-12-10 22:57:54.275100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.580 [2024-12-10 22:57:54.275340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.580 [2024-12-10 22:57:54.275371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.580 [2024-12-10 22:57:54.279396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 
00:26:46.580 [2024-12-10 22:57:54.279626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.580 [2024-12-10 22:57:54.279655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.580 [2024-12-10 22:57:54.283623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.580 [2024-12-10 22:57:54.283839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.580 [2024-12-10 22:57:54.283879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.580 [2024-12-10 22:57:54.287785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.580 [2024-12-10 22:57:54.288000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.580 [2024-12-10 22:57:54.288041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.580 [2024-12-10 22:57:54.291988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.580 [2024-12-10 22:57:54.292170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.580 [2024-12-10 22:57:54.292200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.580 [2024-12-10 22:57:54.296268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.580 [2024-12-10 22:57:54.296479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.580 [2024-12-10 22:57:54.296506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.580 [2024-12-10 22:57:54.300560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.580 [2024-12-10 22:57:54.300744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.580 [2024-12-10 22:57:54.300773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.580 [2024-12-10 22:57:54.305219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.580 [2024-12-10 22:57:54.305429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.580 [2024-12-10 22:57:54.305475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.841 [2024-12-10 22:57:54.309715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.309910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.309939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.314349] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.314599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.314630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.319072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.319270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.319299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.323661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.323846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.323878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.328869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.329084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.329122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:46.842 [2024-12-10 22:57:54.333280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.333494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.333523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.337554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.337729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.337757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.341740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.341926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.341956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.345955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.346171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.346201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.350138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.350328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.350357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.354926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.355186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.355218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.359953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.360208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.360239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.365622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.365902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.365934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.371127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.371339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.371373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.375565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.375774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.375808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.380128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.380371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.380412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.384756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.384979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:46.842 [2024-12-10 22:57:54.385013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.389130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.389342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.389374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.393536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.393743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.393777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.398049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.398249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.398281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.402559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.402759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.402791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.407092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.407333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.407364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.411505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.411715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.411746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.416051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.416262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.842 [2024-12-10 22:57:54.416307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.842 [2024-12-10 22:57:54.420514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.842 [2024-12-10 22:57:54.420735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.420769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.424956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.425156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.425187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.429515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.429729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.429761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.434117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.434316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.434348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.438580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 
00:26:46.843 [2024-12-10 22:57:54.438785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.438818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.443158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.443358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.443389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.447606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.447846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.447876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.452220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.452465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.452496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.457352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.457660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.457693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.462472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.462733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.462765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.468634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.468843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.468877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.473181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.473349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.473383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.477474] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.477689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.477722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.481891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.482083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.482114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.486171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.486351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.486384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.491019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.491265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.491319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:46.843 [2024-12-10 22:57:54.496105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.496340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.496371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.501638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.501932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.501962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.507417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.507648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.507679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.512383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.512589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.512621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.516726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.516892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.516924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.520894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.521062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.521091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.525120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.525300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.525329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.529472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.529648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.529677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.533579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.533754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.533785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.843 [2024-12-10 22:57:54.537850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.843 [2024-12-10 22:57:54.538017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.843 [2024-12-10 22:57:54.538046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.844 [2024-12-10 22:57:54.542120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.844 [2024-12-10 22:57:54.542286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.844 [2024-12-10 22:57:54.542316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.844 [2024-12-10 22:57:54.546380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.844 [2024-12-10 22:57:54.546555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:46.844 [2024-12-10 22:57:54.546595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.844 [2024-12-10 22:57:54.550610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.844 [2024-12-10 22:57:54.550779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.844 [2024-12-10 22:57:54.550812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.844 [2024-12-10 22:57:54.554820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.844 [2024-12-10 22:57:54.554985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.844 [2024-12-10 22:57:54.555015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.844 [2024-12-10 22:57:54.559061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.844 [2024-12-10 22:57:54.559228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.844 [2024-12-10 22:57:54.559257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.844 [2024-12-10 22:57:54.563276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.844 [2024-12-10 22:57:54.563441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.844 [2024-12-10 22:57:54.563470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.844 [2024-12-10 22:57:54.567507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:46.844 [2024-12-10 22:57:54.567678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.844 [2024-12-10 22:57:54.567709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.106 [2024-12-10 22:57:54.571882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.106 [2024-12-10 22:57:54.572048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.106 [2024-12-10 22:57:54.572077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.106 [2024-12-10 22:57:54.576159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.106 [2024-12-10 22:57:54.576322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.106 [2024-12-10 22:57:54.576351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.106 [2024-12-10 22:57:54.580465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.106 [2024-12-10 22:57:54.580644] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.106 [2024-12-10 22:57:54.580673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.106 [2024-12-10 22:57:54.584732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.106 [2024-12-10 22:57:54.584896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.106 [2024-12-10 22:57:54.584925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.106 [2024-12-10 22:57:54.588928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.106 [2024-12-10 22:57:54.589092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.106 [2024-12-10 22:57:54.589121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.106 [2024-12-10 22:57:54.593132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.106 [2024-12-10 22:57:54.593345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.106 [2024-12-10 22:57:54.593374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.106 [2024-12-10 22:57:54.597326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.106 [2024-12-10 22:57:54.597491] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.106 [2024-12-10 22:57:54.597521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.106 [2024-12-10 22:57:54.601552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.106 [2024-12-10 22:57:54.601716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.106 [2024-12-10 22:57:54.601745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.106 [2024-12-10 22:57:54.605774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.106 [2024-12-10 22:57:54.605967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.106 [2024-12-10 22:57:54.606003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.106 [2024-12-10 22:57:54.610044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.106 [2024-12-10 22:57:54.610205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.106 [2024-12-10 22:57:54.610234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.106 [2024-12-10 22:57:54.614489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with 
pdu=0x200016eff3c8 00:26:47.106 [2024-12-10 22:57:54.614665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.614698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.619461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.619741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.619773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.624767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.624960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.624989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.630808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.631000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.631038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.635162] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.635327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.635358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.639462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.639654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.639685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.643874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.644040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.644070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.648417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.648598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.648629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 
22:57:54.652850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.653023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.653052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.657284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.657451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.657481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.661754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.661975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.662024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.666202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.666452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.666484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.670516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.670693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.670723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.674833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.674999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.675029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.679277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.679474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.679503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.107 [2024-12-10 22:57:54.684957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.107 [2024-12-10 22:57:54.685137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.107 [2024-12-10 22:57:54.685180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.107 [2024-12-10 22:57:54.689625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.107 [2024-12-10 22:57:54.689792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.107 [2024-12-10 22:57:54.689821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.107 [2024-12-10 22:57:54.694033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.107 [2024-12-10 22:57:54.694254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.107 [2024-12-10 22:57:54.694287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.107 [2024-12-10 22:57:54.698476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.107 [2024-12-10 22:57:54.698665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.107 [2024-12-10 22:57:54.698694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.107 [2024-12-10 22:57:54.702872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.107 [2024-12-10 22:57:54.703105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.107 [2024-12-10 22:57:54.703137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.107 [2024-12-10 22:57:54.707235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.107 [2024-12-10 22:57:54.707400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.107 [2024-12-10 22:57:54.707433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.107 [2024-12-10 22:57:54.711455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.107 [2024-12-10 22:57:54.711649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.107 [2024-12-10 22:57:54.711678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.107 [2024-12-10 22:57:54.716346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.107 [2024-12-10 22:57:54.716531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.107 [2024-12-10 22:57:54.716568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.107 [2024-12-10 22:57:54.721426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.107 [2024-12-10 22:57:54.721731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.107 [2024-12-10 22:57:54.721763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.107 [2024-12-10 22:57:54.727481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.107 [2024-12-10 22:57:54.727653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.107 [2024-12-10 22:57:54.727691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.107 [2024-12-10 22:57:54.732305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.732435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.732464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.736580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.736672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.736700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.740899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.741009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.741041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.745374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.745490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.745519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.749726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.749795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.749823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.754267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.754360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.754389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.759463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.759608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.759637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.764538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.764683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.764714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.769706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.769878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.769910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.774765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.774954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.774984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.779839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.780040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.780072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.785837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.786029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.786058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.790418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.790541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.790578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.794731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.794888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.794917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.799257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.799405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.799438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.804319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.804455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.804484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.808715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.808805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.808834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.812940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.813029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.813059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.817348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.817479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.817508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.822376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.822563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.822594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.827500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.827637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.827665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.108 [2024-12-10 22:57:54.833475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.108 [2024-12-10 22:57:54.833597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.108 [2024-12-10 22:57:54.833627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.369 [2024-12-10 22:57:54.838386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.369 [2024-12-10 22:57:54.838481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.369 [2024-12-10 22:57:54.838510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.369 [2024-12-10 22:57:54.842862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.369 [2024-12-10 22:57:54.843017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.369 [2024-12-10 22:57:54.843047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.369 [2024-12-10 22:57:54.847209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.369 [2024-12-10 22:57:54.847314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.369 [2024-12-10 22:57:54.847345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.369 [2024-12-10 22:57:54.851707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.369 [2024-12-10 22:57:54.851841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.369 [2024-12-10 22:57:54.851878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.369 [2024-12-10 22:57:54.856249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.369 [2024-12-10 22:57:54.856375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.369 [2024-12-10 22:57:54.856404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.369 [2024-12-10 22:57:54.860679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.369 [2024-12-10 22:57:54.860766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.369 [2024-12-10 22:57:54.860794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.369 [2024-12-10 22:57:54.865181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.865314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.865343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.869624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.869758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.869788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.874060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.874201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.874230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.878466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.878611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.878642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.882864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.882998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.883029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.887426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.887570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.887610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.892011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.892098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.892129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.896206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.896298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.896329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.900716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.900864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.900894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.905873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.906053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.906083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.910918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.911150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.911182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.916741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.916845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.916875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.921614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.921727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.921756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.926067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.926251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.926281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.930469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.930623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.930652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.934964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.935094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.935123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.939472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.939625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.939653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.943828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.943944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.943973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.948236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.948377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.948406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.953036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.953207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.953236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.958132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.958300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.958345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.963872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.963978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.964007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.969145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.969251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.969280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.973463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.973631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.973666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.977856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.370 [2024-12-10 22:57:54.977983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.370 [2024-12-10 22:57:54.978013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.370 [2024-12-10 22:57:54.982187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:54.982350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:54.982380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:54.986667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:54.986834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:54.986865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:54.991126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:54.991285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:54.991315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:54.995385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:54.995561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:54.995590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:54.999836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.000013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.000042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.004111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.004217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.004246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.008355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.008464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.008493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.012554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.012678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.012707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.016745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.016865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.016893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.020982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.021100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.021129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.025085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.025195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.025228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.029374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.029476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.029508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.033543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.033668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.033698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.037752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.037877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.037922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.041998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.042115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.042147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.046229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.046349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.046377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.050392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.050512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.050542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.054630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.054743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.054773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.058834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.058947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.058975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.371 [2024-12-10 22:57:55.062973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.371 [2024-12-10 22:57:55.063090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.371 [2024-12-10 22:57:55.063120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.371 [2024-12-10 22:57:55.067203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.371 [2024-12-10 22:57:55.067326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.371 [2024-12-10 22:57:55.067357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.371 [2024-12-10 22:57:55.071452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.371 [2024-12-10 22:57:55.071576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.371 [2024-12-10 22:57:55.071610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.371 [2024-12-10 22:57:55.075685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.371 [2024-12-10 22:57:55.075804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.371 [2024-12-10 22:57:55.075834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.371 [2024-12-10 22:57:55.080025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.371 [2024-12-10 22:57:55.080129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.371 [2024-12-10 22:57:55.080160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.371 [2024-12-10 22:57:55.084293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.371 [2024-12-10 22:57:55.084409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.371 [2024-12-10 22:57:55.084446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.371 [2024-12-10 22:57:55.088429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.371 [2024-12-10 22:57:55.088554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.372 [2024-12-10 22:57:55.088582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.372 [2024-12-10 22:57:55.092668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.372 [2024-12-10 22:57:55.092840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.372 [2024-12-10 22:57:55.092870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.372 [2024-12-10 22:57:55.097300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.372 [2024-12-10 22:57:55.097452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.372 [2024-12-10 22:57:55.097482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.102352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.102522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.102571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.107659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.107861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.107892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.114873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 6538.00 IOPS, 817.25 MiB/s [2024-12-10T21:57:55.363Z] [2024-12-10 22:57:55.115081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.115113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.120126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 
22:57:55.120278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.120306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.126377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.126489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.126523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.132511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.132733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.132765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.138694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.138872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.138907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.144594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) 
with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.144853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.144895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.149664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.149804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.149847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.154193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.154431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.154461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.159399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.159628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.159660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.164382] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.164598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.164630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.169514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.169667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.169698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.174599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.174751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.174788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 22:57:55.179743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.179947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-12-10 22:57:55.179978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.631 [2024-12-10 
22:57:55.184775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.631 [2024-12-10 22:57:55.185036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.185067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.190941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.191114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.191160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.196032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.196195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.196226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.201173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.201415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.201447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.206223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.206365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.206396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.211281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.211460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.211491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.216393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.216536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.216576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.221510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.221742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.221772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.226519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.226681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.226715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.231615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.231763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.231792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.236814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.236988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.237022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.241949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.242169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.242199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.247184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.247373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.247406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.252329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.252483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.252513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.257367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.257578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.257609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.262447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.262634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.632 [2024-12-10 22:57:55.262664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.267482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.267703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.267734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.272607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.272801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.272831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.277502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.277615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.277649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.282988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.283142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.283186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.288240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.288390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.288420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.293311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.293488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.293518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.298183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.298311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.298341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.302428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.302539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.302576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.306918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.307073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.307106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.312466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.632 [2024-12-10 22:57:55.312569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.632 [2024-12-10 22:57:55.312601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.632 [2024-12-10 22:57:55.316675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:47.633 [2024-12-10 22:57:55.316779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.633 [2024-12-10 22:57:55.316808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.633 [2024-12-10 22:57:55.321087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 
00:26:47.633 [2024-12-10 22:57:55.321185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.633 [2024-12-10 22:57:55.321215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.633 [2024-12-10 22:57:55.325292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.633 [2024-12-10 22:57:55.325412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.633 [2024-12-10 22:57:55.325441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.633 [2024-12-10 22:57:55.329570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.633 [2024-12-10 22:57:55.329670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.633 [2024-12-10 22:57:55.329700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.633 [2024-12-10 22:57:55.333825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.633 [2024-12-10 22:57:55.333932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.633 [2024-12-10 22:57:55.333962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.633 [2024-12-10 22:57:55.337984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.633 [2024-12-10 22:57:55.338076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.633 [2024-12-10 22:57:55.338104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.633 [2024-12-10 22:57:55.342238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.633 [2024-12-10 22:57:55.342342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.633 [2024-12-10 22:57:55.342371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.633 [2024-12-10 22:57:55.346496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.633 [2024-12-10 22:57:55.346631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.633 [2024-12-10 22:57:55.346661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.633 [2024-12-10 22:57:55.350983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.633 [2024-12-10 22:57:55.351069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.633 [2024-12-10 22:57:55.351097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.633 [2024-12-10 22:57:55.355153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.633 [2024-12-10 22:57:55.355266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.633 [2024-12-10 22:57:55.355296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.633 [2024-12-10 22:57:55.359460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.633 [2024-12-10 22:57:55.359578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.633 [2024-12-10 22:57:55.359609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.893 [2024-12-10 22:57:55.363758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.893 [2024-12-10 22:57:55.363847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.893 [2024-12-10 22:57:55.363882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.893 [2024-12-10 22:57:55.368109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.893 [2024-12-10 22:57:55.368197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.893 [2024-12-10 22:57:55.368226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.893 [2024-12-10 22:57:55.372329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.893 [2024-12-10 22:57:55.372438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.893 [2024-12-10 22:57:55.372467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.893 [2024-12-10 22:57:55.376485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.893 [2024-12-10 22:57:55.376599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.893 [2024-12-10 22:57:55.376629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.893 [2024-12-10 22:57:55.380700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.893 [2024-12-10 22:57:55.380802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.893 [2024-12-10 22:57:55.380832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.893 [2024-12-10 22:57:55.384952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.893 [2024-12-10 22:57:55.385060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.893 [2024-12-10 22:57:55.385088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.893 [2024-12-10 22:57:55.389117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.893 [2024-12-10 22:57:55.389214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.893 [2024-12-10 22:57:55.389245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.893 [2024-12-10 22:57:55.393277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.893 [2024-12-10 22:57:55.393404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.893 [2024-12-10 22:57:55.393445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.893 [2024-12-10 22:57:55.397580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.893 [2024-12-10 22:57:55.397702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.893 [2024-12-10 22:57:55.397742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.893 [2024-12-10 22:57:55.402231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.893 [2024-12-10 22:57:55.402345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.402377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.406478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.406591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.406620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.410701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.410801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.410836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.414994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.415091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.415121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.419259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.419427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.419466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.424110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.424300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.424330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.429570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.429744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.429775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.435416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.435591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.435624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.440021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.440177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.440207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.444319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.444479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.444508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.448938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.449039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.449067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.454138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.454251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.454281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.458340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.458443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.458477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.462646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.462748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.462778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.466879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.467000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.467030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.471920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.472082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.472113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.476976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.477160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.477191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.483289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.483495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.483529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.487748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.487908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.487939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.492009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.492135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.492166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.496510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.496644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.496678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.500823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.500969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.501002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.505265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.505422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.505453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.509814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.509951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.509983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.894 [2024-12-10 22:57:55.514036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.894 [2024-12-10 22:57:55.514143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.894 [2024-12-10 22:57:55.514173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.518235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.518330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.518362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.522860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.522952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.522980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.527845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.527973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.528002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.532253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.532366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.532396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.537095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.537270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.537302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.542169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.542331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.542383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.547595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.547796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.547827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.553120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.553297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.553339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.557322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.557423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.557452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.561690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.561801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.561846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.566107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.566263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.566294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.570541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.570681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.570710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.575002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.575139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.575167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.579481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.579650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.579679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.583927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.584081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.584110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.588402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.588597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.588629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.592763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.592876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.592906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.597266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.597356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.597384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.601662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.601751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.601779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.606331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.606571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.606602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.611448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.611623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.611652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:47.895 [2024-12-10 22:57:55.617257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:47.895 [2024-12-10 22:57:55.617388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.895 [2024-12-10 22:57:55.617419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.622513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.622659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.622690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.626861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.627039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.627070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.631360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.631525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.631576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.636460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.636573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.636605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.641067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.641171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.641211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.645638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.645739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.645780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.650089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.650188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.650224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.654640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.654731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.654760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.659241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.659383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.659415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.664537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.664704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.664741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.669981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.670095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.670128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.674203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.674346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.674392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.678575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.678705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.678736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.682948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.683099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.683129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.687273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.687359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.687388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:48.190 [2024-12-10 22:57:55.691729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8
00:26:48.190 [2024-12-10 22:57:55.691855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.190 [2024-12-10 22:57:55.691885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0
dnr:0 00:26:48.190 [2024-12-10 22:57:55.696253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.190 [2024-12-10 22:57:55.696428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.190 [2024-12-10 22:57:55.696461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.190 [2024-12-10 22:57:55.700610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.190 [2024-12-10 22:57:55.700700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.190 [2024-12-10 22:57:55.700728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.190 [2024-12-10 22:57:55.704899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.190 [2024-12-10 22:57:55.705016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.190 [2024-12-10 22:57:55.705047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.190 [2024-12-10 22:57:55.709520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.190 [2024-12-10 22:57:55.709707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.190 [2024-12-10 22:57:55.709741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.190 [2024-12-10 22:57:55.714724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.714939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.714983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.719938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.720119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.720149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.725851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.725962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.725993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.730236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.730346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.730380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.734705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.734799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.734828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.739115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.739216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.739247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.743372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.743554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.743585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.748290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.748485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.191 [2024-12-10 22:57:55.748515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.753256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.753424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.753454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.758983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.759156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.759186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.763916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.764012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.764045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.768249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.768386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.768416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.772725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.772820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.772849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.777071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.777197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.777228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.781435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.781590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.781620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.785765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.785873] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.785912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.790005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.790115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.790160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.794403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.794504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.794535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.799461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.799632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.799663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.804438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.804602] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.804633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.809929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.810039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.810069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.815502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.815652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.815685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.820655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.820848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.820881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.825732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with 
pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.825934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.825964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.830800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.191 [2024-12-10 22:57:55.831021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.191 [2024-12-10 22:57:55.831051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.191 [2024-12-10 22:57:55.835790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.192 [2024-12-10 22:57:55.835973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.192 [2024-12-10 22:57:55.836004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.192 [2024-12-10 22:57:55.840868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.192 [2024-12-10 22:57:55.841068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.192 [2024-12-10 22:57:55.841114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.192 [2024-12-10 22:57:55.845964] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.192 [2024-12-10 22:57:55.846156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.192 [2024-12-10 22:57:55.846204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.192 [2024-12-10 22:57:55.851035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.192 [2024-12-10 22:57:55.851216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.192 [2024-12-10 22:57:55.851247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.192 [2024-12-10 22:57:55.856126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.192 [2024-12-10 22:57:55.856297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.192 [2024-12-10 22:57:55.856327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.192 [2024-12-10 22:57:55.861254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.192 [2024-12-10 22:57:55.861379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.192 [2024-12-10 22:57:55.861411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.192 [2024-12-10 
22:57:55.866302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.192 [2024-12-10 22:57:55.866442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.192 [2024-12-10 22:57:55.866474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.192 [2024-12-10 22:57:55.872201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.192 [2024-12-10 22:57:55.872429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.192 [2024-12-10 22:57:55.872460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.192 [2024-12-10 22:57:55.877696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.192 [2024-12-10 22:57:55.877863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.192 [2024-12-10 22:57:55.877894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.192 [2024-12-10 22:57:55.882238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.192 [2024-12-10 22:57:55.882344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.192 [2024-12-10 22:57:55.882373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:48.471 [2024-12-10 22:57:55.886622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.471 [2024-12-10 22:57:55.886772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.471 [2024-12-10 22:57:55.886801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.471 [2024-12-10 22:57:55.891348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.471 [2024-12-10 22:57:55.891472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.471 [2024-12-10 22:57:55.891510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.471 [2024-12-10 22:57:55.895845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.471 [2024-12-10 22:57:55.895989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.471 [2024-12-10 22:57:55.896028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.471 [2024-12-10 22:57:55.901082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.471 [2024-12-10 22:57:55.901255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.471 [2024-12-10 22:57:55.901286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.471 [2024-12-10 22:57:55.906698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.471 [2024-12-10 22:57:55.906838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.471 [2024-12-10 22:57:55.906867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.471 [2024-12-10 22:57:55.911753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.471 [2024-12-10 22:57:55.911906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.471 [2024-12-10 22:57:55.911937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.471 [2024-12-10 22:57:55.916422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.471 [2024-12-10 22:57:55.916540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.471 [2024-12-10 22:57:55.916591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.471 [2024-12-10 22:57:55.921179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.471 [2024-12-10 22:57:55.921367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.471 [2024-12-10 22:57:55.921396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.471 [2024-12-10 22:57:55.926668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.926798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.926827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.932070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.932281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.932310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.937986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.938134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.938178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.943072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.943215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.472 [2024-12-10 22:57:55.943244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.948193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.948315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.948343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.952774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.952967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.952994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.958013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.958192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.958220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.964404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.964582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.964616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.969556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.969664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.969693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.974675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.974860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.974903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.979799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.979968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.979998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.984843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.985021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.985050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.989935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.990097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.990127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.994603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.994679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.994709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:55.999514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:55.999680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:55.999710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:56.004951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 
00:26:48.472 [2024-12-10 22:57:56.005104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:56.005133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:56.010911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:56.011088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:56.011117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:56.015654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:56.015761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:56.015790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:56.019869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:56.019981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:56.020008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:56.024438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:56.024531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:56.024571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:56.029440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:56.029517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:56.029554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:56.034106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:56.034176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:56.034206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:56.038911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:56.038987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:56.039016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:56.043380] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:56.043470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:56.043501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:56.047607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:56.047702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.472 [2024-12-10 22:57:56.047740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.472 [2024-12-10 22:57:56.051718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.472 [2024-12-10 22:57:56.051800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.051831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.055928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.056002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.056032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:48.473 [2024-12-10 22:57:56.060096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.060180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.060209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.064256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.064337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.064368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.068386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.068464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.068493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.072533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.072611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.072641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.076687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.076762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.076793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.080892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.080962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.080991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.084972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.085057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.085091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.089113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.089192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.089220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.093329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.093413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.093445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.097510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.097599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.097628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.101758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.101831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.101864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.106140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.106326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.473 [2024-12-10 22:57:56.106355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.473 [2024-12-10 22:57:56.111194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.111402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.111431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.473 6507.50 IOPS, 813.44 MiB/s [2024-12-10T21:57:56.205Z] [2024-12-10 22:57:56.117580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1209180) with pdu=0x200016eff3c8 00:26:48.473 [2024-12-10 22:57:56.117753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-10 22:57:56.117783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.473 00:26:48.473 Latency(us) 00:26:48.473 [2024-12-10T21:57:56.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.473 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:48.473 nvme0n1 : 2.00 6502.95 812.87 0.00 0.00 2452.86 1917.53 11262.48 00:26:48.473 [2024-12-10T21:57:56.205Z] =================================================================================================================== 00:26:48.473 [2024-12-10T21:57:56.205Z] Total : 6502.95 812.87 0.00 0.00 2452.86 1917.53 11262.48 00:26:48.473 { 00:26:48.473 "results": [ 00:26:48.473 { 00:26:48.473 "job": "nvme0n1", 00:26:48.473 "core_mask": "0x2", 00:26:48.473 "workload": "randwrite", 
00:26:48.473 "status": "finished", 00:26:48.473 "queue_depth": 16, 00:26:48.473 "io_size": 131072, 00:26:48.473 "runtime": 2.003859, 00:26:48.473 "iops": 6502.952553048892, 00:26:48.473 "mibps": 812.8690691311115, 00:26:48.473 "io_failed": 0, 00:26:48.473 "io_timeout": 0, 00:26:48.473 "avg_latency_us": 2452.859011871975, 00:26:48.473 "min_latency_us": 1917.5348148148148, 00:26:48.473 "max_latency_us": 11262.482962962962 00:26:48.473 } 00:26:48.473 ], 00:26:48.473 "core_count": 1 00:26:48.473 } 00:26:48.473 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:48.473 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:48.473 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:48.473 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:48.473 | .driver_specific 00:26:48.473 | .nvme_error 00:26:48.473 | .status_code 00:26:48.473 | .command_transient_transport_error' 00:26:48.732 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 421 > 0 )) 00:26:48.732 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 173292 00:26:48.732 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 173292 ']' 00:26:48.732 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 173292 00:26:48.732 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:48.732 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:48.732 22:57:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173292 00:26:48.733 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:48.733 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:48.733 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173292' 00:26:48.733 killing process with pid 173292 00:26:48.733 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 173292 00:26:48.733 Received shutdown signal, test time was about 2.000000 seconds 00:26:48.733 00:26:48.733 Latency(us) 00:26:48.733 [2024-12-10T21:57:56.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.733 [2024-12-10T21:57:56.465Z] =================================================================================================================== 00:26:48.733 [2024-12-10T21:57:56.465Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:48.733 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 173292 00:26:48.991 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 171916 00:26:48.991 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 171916 ']' 00:26:48.991 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 171916 00:26:48.991 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:48.991 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:48.991 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171916 00:26:48.991 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:48.991 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:48.991 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 171916' 00:26:48.991 killing process with pid 171916 00:26:48.991 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 171916 00:26:48.991 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 171916 00:26:49.250 00:26:49.250 real 0m15.775s 00:26:49.250 user 0m31.529s 00:26:49.250 sys 0m4.392s 00:26:49.250 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.250 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:49.250 ************************************ 00:26:49.250 END TEST nvmf_digest_error 00:26:49.250 ************************************ 00:26:49.250 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:49.250 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:49.250 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:49.250 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:49.250 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:49.250 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:49.250 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:49.250 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:26:49.250 rmmod nvme_tcp 00:26:49.511 rmmod nvme_fabrics 00:26:49.511 rmmod nvme_keyring 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 171916 ']' 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 171916 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 171916 ']' 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 171916 00:26:49.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (171916) - No such process 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 171916 is not found' 00:26:49.511 Process with pid 171916 is not found 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 
-- # remove_spdk_ns 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.511 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.418 22:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:51.418 00:26:51.418 real 0m36.216s 00:26:51.418 user 1m3.976s 00:26:51.418 sys 0m10.513s 00:26:51.418 22:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:51.418 22:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:51.418 ************************************ 00:26:51.418 END TEST nvmf_digest 00:26:51.418 ************************************ 00:26:51.418 22:57:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:51.418 22:57:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:51.418 22:57:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:51.418 22:57:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:51.418 22:57:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:51.418 22:57:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:51.418 22:57:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.418 ************************************ 00:26:51.418 START TEST nvmf_bdevperf 00:26:51.418 ************************************ 00:26:51.418 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:51.678 * Looking for test storage... 
00:26:51.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.678 --rc genhtml_branch_coverage=1 00:26:51.678 --rc genhtml_function_coverage=1 00:26:51.678 --rc genhtml_legend=1 00:26:51.678 --rc geninfo_all_blocks=1 00:26:51.678 --rc geninfo_unexecuted_blocks=1 00:26:51.678 00:26:51.678 ' 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:26:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.678 --rc genhtml_branch_coverage=1 00:26:51.678 --rc genhtml_function_coverage=1 00:26:51.678 --rc genhtml_legend=1 00:26:51.678 --rc geninfo_all_blocks=1 00:26:51.678 --rc geninfo_unexecuted_blocks=1 00:26:51.678 00:26:51.678 ' 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.678 --rc genhtml_branch_coverage=1 00:26:51.678 --rc genhtml_function_coverage=1 00:26:51.678 --rc genhtml_legend=1 00:26:51.678 --rc geninfo_all_blocks=1 00:26:51.678 --rc geninfo_unexecuted_blocks=1 00:26:51.678 00:26:51.678 ' 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.678 --rc genhtml_branch_coverage=1 00:26:51.678 --rc genhtml_function_coverage=1 00:26:51.678 --rc genhtml_legend=1 00:26:51.678 --rc geninfo_all_blocks=1 00:26:51.678 --rc geninfo_unexecuted_blocks=1 00:26:51.678 00:26:51.678 ' 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.678 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:51.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:51.679 22:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.216 22:58:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:54.216 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.216 
22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:54.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:54.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:54.216 22:58:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:54.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:54.216 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:54.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:26:54.217 00:26:54.217 --- 10.0.0.2 ping statistics --- 00:26:54.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.217 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:54.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:26:54.217 00:26:54.217 --- 10.0.0.1 ping statistics --- 00:26:54.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.217 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=175776 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 175776 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 175776 ']' 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.217 [2024-12-10 22:58:01.636746] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:26:54.217 [2024-12-10 22:58:01.636823] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.217 [2024-12-10 22:58:01.708002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:54.217 [2024-12-10 22:58:01.761813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.217 [2024-12-10 22:58:01.761870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.217 [2024-12-10 22:58:01.761893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.217 [2024-12-10 22:58:01.761904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.217 [2024-12-10 22:58:01.761913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:54.217 [2024-12-10 22:58:01.763681] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:54.217 [2024-12-10 22:58:01.763748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:54.217 [2024-12-10 22:58:01.763755] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.217 [2024-12-10 22:58:01.911154] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.217 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.477 Malloc0 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.477 [2024-12-10 22:58:01.969303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:54.477 
22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:54.477 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:54.478 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:54.478 { 00:26:54.478 "params": { 00:26:54.478 "name": "Nvme$subsystem", 00:26:54.478 "trtype": "$TEST_TRANSPORT", 00:26:54.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.478 "adrfam": "ipv4", 00:26:54.478 "trsvcid": "$NVMF_PORT", 00:26:54.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.478 "hdgst": ${hdgst:-false}, 00:26:54.478 "ddgst": ${ddgst:-false} 00:26:54.478 }, 00:26:54.478 "method": "bdev_nvme_attach_controller" 00:26:54.478 } 00:26:54.478 EOF 00:26:54.478 )") 00:26:54.478 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:54.478 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:54.478 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:54.478 22:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:54.478 "params": { 00:26:54.478 "name": "Nvme1", 00:26:54.478 "trtype": "tcp", 00:26:54.478 "traddr": "10.0.0.2", 00:26:54.478 "adrfam": "ipv4", 00:26:54.478 "trsvcid": "4420", 00:26:54.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:54.478 "hdgst": false, 00:26:54.478 "ddgst": false 00:26:54.478 }, 00:26:54.478 "method": "bdev_nvme_attach_controller" 00:26:54.478 }' 00:26:54.478 [2024-12-10 22:58:02.021298] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:26:54.478 [2024-12-10 22:58:02.021384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175801 ] 00:26:54.478 [2024-12-10 22:58:02.091038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.478 [2024-12-10 22:58:02.151602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.737 Running I/O for 1 seconds... 00:26:55.673 8504.00 IOPS, 33.22 MiB/s 00:26:55.673 Latency(us) 00:26:55.673 [2024-12-10T21:58:03.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.673 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:55.673 Verification LBA range: start 0x0 length 0x4000 00:26:55.673 Nvme1n1 : 1.01 8536.13 33.34 0.00 0.00 14924.48 3155.44 14854.83 00:26:55.673 [2024-12-10T21:58:03.405Z] =================================================================================================================== 00:26:55.673 [2024-12-10T21:58:03.405Z] Total : 8536.13 33.34 0.00 0.00 14924.48 3155.44 14854.83 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=176013 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:55.933 { 00:26:55.933 "params": { 00:26:55.933 "name": "Nvme$subsystem", 00:26:55.933 "trtype": "$TEST_TRANSPORT", 00:26:55.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:55.933 "adrfam": "ipv4", 00:26:55.933 "trsvcid": "$NVMF_PORT", 00:26:55.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:55.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:55.933 "hdgst": ${hdgst:-false}, 00:26:55.933 "ddgst": ${ddgst:-false} 00:26:55.933 }, 00:26:55.933 "method": "bdev_nvme_attach_controller" 00:26:55.933 } 00:26:55.933 EOF 00:26:55.933 )") 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:55.933 22:58:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:55.933 "params": { 00:26:55.933 "name": "Nvme1", 00:26:55.933 "trtype": "tcp", 00:26:55.933 "traddr": "10.0.0.2", 00:26:55.933 "adrfam": "ipv4", 00:26:55.933 "trsvcid": "4420", 00:26:55.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:55.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:55.933 "hdgst": false, 00:26:55.933 "ddgst": false 00:26:55.933 }, 00:26:55.933 "method": "bdev_nvme_attach_controller" 00:26:55.933 }' 00:26:55.933 [2024-12-10 22:58:03.614972] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:26:55.933 [2024-12-10 22:58:03.615064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176013 ] 00:26:56.195 [2024-12-10 22:58:03.683496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.195 [2024-12-10 22:58:03.742279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.456 Running I/O for 15 seconds... 00:26:58.774 8387.00 IOPS, 32.76 MiB/s [2024-12-10T21:58:06.769Z] 8570.50 IOPS, 33.48 MiB/s [2024-12-10T21:58:06.769Z] 22:58:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 175776 00:26:59.037 22:58:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:59.037 [2024-12-10 22:58:06.579799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-10 22:58:06.579861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.579906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-10 22:58:06.579924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.579941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-10 22:58:06.579955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.579971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-10 22:58:06.579986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-10 22:58:06.580044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-10 22:58:06.580074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-10 22:58:06.580118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.037 [2024-12-10 22:58:06.580147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.037 [2024-12-10 22:58:06.580175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:59.037 [2024-12-10 22:58:06.580190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.037 [2024-12-10 22:58:06.580204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.037 [2024-12-10 22:58:06.580233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.037 [2024-12-10 22:58:06.580264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.037 [2024-12-10 22:58:06.580295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.037 [2024-12-10 22:58:06.580326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.037 [2024-12-10 22:58:06.580372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.037 [2024-12-10 22:58:06.580400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.037 [2024-12-10 22:58:06.580415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.038 [2024-12-10 22:58:06.580644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 
[2024-12-10 22:58:06.580747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.580987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.580999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 
[2024-12-10 22:58:06.581248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.038 [2024-12-10 22:58:06.581477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.038 [2024-12-10 22:58:06.581490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 
[2024-12-10 22:58:06.581740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.581984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.581997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 
[2024-12-10 22:58:06.582228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.039 [2024-12-10 22:58:06.582565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.039 [2024-12-10 22:58:06.582580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 
[2024-12-10 22:58:06.582720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.582976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.582989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 
[2024-12-10 22:58:06.583219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.040 [2024-12-10 22:58:06.583446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.040 [2024-12-10 22:58:06.583672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.040 [2024-12-10 22:58:06.583687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7b0e0 is same with the state(6) to be set 00:26:59.041 [2024-12-10 22:58:06.583705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:59.041 [2024-12-10 22:58:06.583717] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:59.041 [2024-12-10 22:58:06.583728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39024 len:8 PRP1 0x0 PRP2 0x0 00:26:59.041 [2024-12-10 22:58:06.583747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.041 [2024-12-10 22:58:06.588150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.041 [2024-12-10 22:58:06.588231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.041 [2024-12-10 22:58:06.588949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.041 [2024-12-10 22:58:06.588979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.041 [2024-12-10 22:58:06.588996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.041 [2024-12-10 22:58:06.589253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.041 [2024-12-10 22:58:06.589465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.041 [2024-12-10 22:58:06.589483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.041 [2024-12-10 22:58:06.589499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.041 [2024-12-10 22:58:06.589512] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.041 [2024-12-10 22:58:06.601900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.041 [2024-12-10 22:58:06.602332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.041 [2024-12-10 22:58:06.602378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.041 [2024-12-10 22:58:06.602395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.041 [2024-12-10 22:58:06.602652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.041 [2024-12-10 22:58:06.602886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.041 [2024-12-10 22:58:06.602920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.041 [2024-12-10 22:58:06.602933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.041 [2024-12-10 22:58:06.602945] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.041 [2024-12-10 22:58:06.615623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.041 [2024-12-10 22:58:06.616004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.041 [2024-12-10 22:58:06.616033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.041 [2024-12-10 22:58:06.616049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.041 [2024-12-10 22:58:06.616259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.041 [2024-12-10 22:58:06.616470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.041 [2024-12-10 22:58:06.616490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.041 [2024-12-10 22:58:06.616503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.041 [2024-12-10 22:58:06.616515] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.041 [2024-12-10 22:58:06.629007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.041 [2024-12-10 22:58:06.629375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.041 [2024-12-10 22:58:06.629404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.041 [2024-12-10 22:58:06.629420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.041 [2024-12-10 22:58:06.629662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.041 [2024-12-10 22:58:06.629903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.041 [2024-12-10 22:58:06.629937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.041 [2024-12-10 22:58:06.629949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.041 [2024-12-10 22:58:06.629961] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.041 [2024-12-10 22:58:06.642416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.041 [2024-12-10 22:58:06.642853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.041 [2024-12-10 22:58:06.642883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.041 [2024-12-10 22:58:06.642899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.041 [2024-12-10 22:58:06.643138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.041 [2024-12-10 22:58:06.643329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.041 [2024-12-10 22:58:06.643348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.041 [2024-12-10 22:58:06.643360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.041 [2024-12-10 22:58:06.643372] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.041 [2024-12-10 22:58:06.655742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.041 [2024-12-10 22:58:06.656129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.041 [2024-12-10 22:58:06.656156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.041 [2024-12-10 22:58:06.656171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.041 [2024-12-10 22:58:06.656406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.041 [2024-12-10 22:58:06.656662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.041 [2024-12-10 22:58:06.656684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.041 [2024-12-10 22:58:06.656697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.041 [2024-12-10 22:58:06.656710] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.041 [2024-12-10 22:58:06.668999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.041 [2024-12-10 22:58:06.669348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.041 [2024-12-10 22:58:06.669376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.041 [2024-12-10 22:58:06.669392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.041 [2024-12-10 22:58:06.669642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.041 [2024-12-10 22:58:06.669859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.041 [2024-12-10 22:58:06.669883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.041 [2024-12-10 22:58:06.669897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.041 [2024-12-10 22:58:06.669924] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.041 [2024-12-10 22:58:06.682067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.041 [2024-12-10 22:58:06.682414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.041 [2024-12-10 22:58:06.682443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.041 [2024-12-10 22:58:06.682459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.041 [2024-12-10 22:58:06.682709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.041 [2024-12-10 22:58:06.682937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.041 [2024-12-10 22:58:06.682957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.041 [2024-12-10 22:58:06.682969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.041 [2024-12-10 22:58:06.682980] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.041 [2024-12-10 22:58:06.695203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.041 [2024-12-10 22:58:06.695577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.041 [2024-12-10 22:58:06.695605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.041 [2024-12-10 22:58:06.695621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.041 [2024-12-10 22:58:06.695841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.041 [2024-12-10 22:58:06.696048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.042 [2024-12-10 22:58:06.696067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.042 [2024-12-10 22:58:06.696079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.042 [2024-12-10 22:58:06.696090] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.042 [2024-12-10 22:58:06.708290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.042 [2024-12-10 22:58:06.708586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.042 [2024-12-10 22:58:06.708627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.042 [2024-12-10 22:58:06.708643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.042 [2024-12-10 22:58:06.708842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.042 [2024-12-10 22:58:06.709063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.042 [2024-12-10 22:58:06.709082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.042 [2024-12-10 22:58:06.709095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.042 [2024-12-10 22:58:06.709111] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.042 [2024-12-10 22:58:06.721398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.042 [2024-12-10 22:58:06.721715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.042 [2024-12-10 22:58:06.721742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.042 [2024-12-10 22:58:06.721758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.042 [2024-12-10 22:58:06.721955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.042 [2024-12-10 22:58:06.722177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.042 [2024-12-10 22:58:06.722196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.042 [2024-12-10 22:58:06.722208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.042 [2024-12-10 22:58:06.722220] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.042 [2024-12-10 22:58:06.734586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.042 [2024-12-10 22:58:06.734969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.042 [2024-12-10 22:58:06.734997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.042 [2024-12-10 22:58:06.735013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.042 [2024-12-10 22:58:06.735231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.042 [2024-12-10 22:58:06.735438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.042 [2024-12-10 22:58:06.735457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.042 [2024-12-10 22:58:06.735469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.042 [2024-12-10 22:58:06.735481] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.042 [2024-12-10 22:58:06.747754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.042 [2024-12-10 22:58:06.748162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.042 [2024-12-10 22:58:06.748190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.042 [2024-12-10 22:58:06.748206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.042 [2024-12-10 22:58:06.748445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.042 [2024-12-10 22:58:06.748696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.042 [2024-12-10 22:58:06.748717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.042 [2024-12-10 22:58:06.748730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.042 [2024-12-10 22:58:06.748742] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.042 [2024-12-10 22:58:06.760957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.042 [2024-12-10 22:58:06.761317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.042 [2024-12-10 22:58:06.761347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.042 [2024-12-10 22:58:06.761363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.042 [2024-12-10 22:58:06.761622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.042 [2024-12-10 22:58:06.761863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.042 [2024-12-10 22:58:06.761883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.042 [2024-12-10 22:58:06.761897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.042 [2024-12-10 22:58:06.761909] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.302 [2024-12-10 22:58:06.774260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.302 [2024-12-10 22:58:06.774679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.302 [2024-12-10 22:58:06.774711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.302 [2024-12-10 22:58:06.774728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.302 [2024-12-10 22:58:06.774975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.302 [2024-12-10 22:58:06.775185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.302 [2024-12-10 22:58:06.775205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.302 [2024-12-10 22:58:06.775218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.302 [2024-12-10 22:58:06.775230] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.302 [2024-12-10 22:58:06.787308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.302 [2024-12-10 22:58:06.787719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.302 [2024-12-10 22:58:06.787748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.302 [2024-12-10 22:58:06.787764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.302 [2024-12-10 22:58:06.788004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.302 [2024-12-10 22:58:06.788210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.302 [2024-12-10 22:58:06.788228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.302 [2024-12-10 22:58:06.788241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.302 [2024-12-10 22:58:06.788253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.302 [2024-12-10 22:58:06.800461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.302 [2024-12-10 22:58:06.800846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-12-10 22:58:06.800874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.302 [2024-12-10 22:58:06.800890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.302 [2024-12-10 22:58:06.801117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.302 [2024-12-10 22:58:06.801323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.302 [2024-12-10 22:58:06.801342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.302 [2024-12-10 22:58:06.801354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.302 [2024-12-10 22:58:06.801366] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.302 [2024-12-10 22:58:06.813456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.302 [2024-12-10 22:58:06.813829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-12-10 22:58:06.813874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.302 [2024-12-10 22:58:06.813891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.302 [2024-12-10 22:58:06.814126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.302 [2024-12-10 22:58:06.814331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.302 [2024-12-10 22:58:06.814350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.302 [2024-12-10 22:58:06.814362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.303 [2024-12-10 22:58:06.814374] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.303 [2024-12-10 22:58:06.826456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.303 [2024-12-10 22:58:06.826896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-12-10 22:58:06.826941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.303 [2024-12-10 22:58:06.826958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.303 [2024-12-10 22:58:06.827195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.303 [2024-12-10 22:58:06.827401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.303 [2024-12-10 22:58:06.827420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.303 [2024-12-10 22:58:06.827432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.303 [2024-12-10 22:58:06.827443] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.303 [2024-12-10 22:58:06.839627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.303 [2024-12-10 22:58:06.840032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-12-10 22:58:06.840061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.303 [2024-12-10 22:58:06.840078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.303 [2024-12-10 22:58:06.840322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.303 [2024-12-10 22:58:06.840561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.303 [2024-12-10 22:58:06.840602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.303 [2024-12-10 22:58:06.840617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.303 [2024-12-10 22:58:06.840630] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.303 [2024-12-10 22:58:06.853282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.303 [2024-12-10 22:58:06.853656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-12-10 22:58:06.853685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.303 [2024-12-10 22:58:06.853702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.303 [2024-12-10 22:58:06.853939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.303 [2024-12-10 22:58:06.854177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.303 [2024-12-10 22:58:06.854197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.303 [2024-12-10 22:58:06.854225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.303 [2024-12-10 22:58:06.854239] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.303 [2024-12-10 22:58:06.866541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.303 [2024-12-10 22:58:06.866880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-12-10 22:58:06.866909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.303 [2024-12-10 22:58:06.866925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.303 [2024-12-10 22:58:06.867150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.303 [2024-12-10 22:58:06.867355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.303 [2024-12-10 22:58:06.867374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.303 [2024-12-10 22:58:06.867386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.303 [2024-12-10 22:58:06.867398] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.303 [2024-12-10 22:58:06.879780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.303 [2024-12-10 22:58:06.880212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-12-10 22:58:06.880239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.303 [2024-12-10 22:58:06.880256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.303 [2024-12-10 22:58:06.880495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.303 [2024-12-10 22:58:06.880741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.303 [2024-12-10 22:58:06.880762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.303 [2024-12-10 22:58:06.880776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.303 [2024-12-10 22:58:06.880793] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.303 [2024-12-10 22:58:06.893026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.303 [2024-12-10 22:58:06.893343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-12-10 22:58:06.893370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.303 [2024-12-10 22:58:06.893386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.303 [2024-12-10 22:58:06.893631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.303 [2024-12-10 22:58:06.893848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.303 [2024-12-10 22:58:06.893867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.303 [2024-12-10 22:58:06.893880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.303 [2024-12-10 22:58:06.893906] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.303 [2024-12-10 22:58:06.906295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.303 [2024-12-10 22:58:06.906661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-12-10 22:58:06.906690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.303 [2024-12-10 22:58:06.906707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.303 [2024-12-10 22:58:06.906952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.303 [2024-12-10 22:58:06.907148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.303 [2024-12-10 22:58:06.907167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.303 [2024-12-10 22:58:06.907180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.303 [2024-12-10 22:58:06.907192] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.303 [2024-12-10 22:58:06.920108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.303 [2024-12-10 22:58:06.920452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-12-10 22:58:06.920481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.303 [2024-12-10 22:58:06.920498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.303 [2024-12-10 22:58:06.920724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.303 [2024-12-10 22:58:06.920966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.303 [2024-12-10 22:58:06.920986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.303 [2024-12-10 22:58:06.920999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.303 [2024-12-10 22:58:06.921012] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.303 [2024-12-10 22:58:06.933718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.303 [2024-12-10 22:58:06.934121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-12-10 22:58:06.934154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.303 [2024-12-10 22:58:06.934171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.303 [2024-12-10 22:58:06.934393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.303 [2024-12-10 22:58:06.934627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.303 [2024-12-10 22:58:06.934650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.303 [2024-12-10 22:58:06.934664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.303 [2024-12-10 22:58:06.934677] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.303 [2024-12-10 22:58:06.947377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.304 [2024-12-10 22:58:06.947742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.304 [2024-12-10 22:58:06.947772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.304 [2024-12-10 22:58:06.947788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.304 [2024-12-10 22:58:06.948045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.304 [2024-12-10 22:58:06.948261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.304 [2024-12-10 22:58:06.948280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.304 [2024-12-10 22:58:06.948294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.304 [2024-12-10 22:58:06.948306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.304 [2024-12-10 22:58:06.961011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.304 [2024-12-10 22:58:06.961404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.304 [2024-12-10 22:58:06.961434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.304 [2024-12-10 22:58:06.961450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.304 [2024-12-10 22:58:06.961678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.304 [2024-12-10 22:58:06.961909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.304 [2024-12-10 22:58:06.961929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.304 [2024-12-10 22:58:06.961957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.304 [2024-12-10 22:58:06.961969] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.304 [2024-12-10 22:58:06.974693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.304 [2024-12-10 22:58:06.975141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.304 [2024-12-10 22:58:06.975185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.304 [2024-12-10 22:58:06.975202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.304 [2024-12-10 22:58:06.975451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.304 [2024-12-10 22:58:06.975692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.304 [2024-12-10 22:58:06.975715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.304 [2024-12-10 22:58:06.975729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.304 [2024-12-10 22:58:06.975743] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.304 [2024-12-10 22:58:06.988266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.304 [2024-12-10 22:58:06.988609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.304 [2024-12-10 22:58:06.988639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.304 [2024-12-10 22:58:06.988655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.304 [2024-12-10 22:58:06.988887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.304 [2024-12-10 22:58:06.989105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.304 [2024-12-10 22:58:06.989124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.304 [2024-12-10 22:58:06.989138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.304 [2024-12-10 22:58:06.989165] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.304 [2024-12-10 22:58:07.001627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.304 [2024-12-10 22:58:07.002036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.304 [2024-12-10 22:58:07.002064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.304 [2024-12-10 22:58:07.002079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.304 [2024-12-10 22:58:07.002276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.304 [2024-12-10 22:58:07.002500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.304 [2024-12-10 22:58:07.002519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.304 [2024-12-10 22:58:07.002557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.304 [2024-12-10 22:58:07.002573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.304 [2024-12-10 22:58:07.014943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.304 [2024-12-10 22:58:07.015352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.304 [2024-12-10 22:58:07.015381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.304 [2024-12-10 22:58:07.015397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.304 [2024-12-10 22:58:07.015654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.304 [2024-12-10 22:58:07.015901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.304 [2024-12-10 22:58:07.015925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.304 [2024-12-10 22:58:07.015938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.304 [2024-12-10 22:58:07.015950] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.304 [2024-12-10 22:58:07.028297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.304 [2024-12-10 22:58:07.028641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.304 [2024-12-10 22:58:07.028671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.304 [2024-12-10 22:58:07.028688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.304 [2024-12-10 22:58:07.028934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.304 [2024-12-10 22:58:07.029130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.304 [2024-12-10 22:58:07.029149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.304 [2024-12-10 22:58:07.029164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.304 [2024-12-10 22:58:07.029177] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.564 [2024-12-10 22:58:07.041504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.564 [2024-12-10 22:58:07.041902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.564 [2024-12-10 22:58:07.041932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.564 [2024-12-10 22:58:07.041948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.564 [2024-12-10 22:58:07.042188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.564 [2024-12-10 22:58:07.042378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.564 [2024-12-10 22:58:07.042397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.564 [2024-12-10 22:58:07.042410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.565 [2024-12-10 22:58:07.042422] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.565 [2024-12-10 22:58:07.054595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.565 [2024-12-10 22:58:07.054947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.565 [2024-12-10 22:58:07.054974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.565 [2024-12-10 22:58:07.054990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.565 [2024-12-10 22:58:07.055201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.565 [2024-12-10 22:58:07.055407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.565 [2024-12-10 22:58:07.055425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.565 [2024-12-10 22:58:07.055438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.565 [2024-12-10 22:58:07.055454] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.565 [2024-12-10 22:58:07.067626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.565 [2024-12-10 22:58:07.067982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.565 [2024-12-10 22:58:07.068024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.565 [2024-12-10 22:58:07.068040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.565 [2024-12-10 22:58:07.068258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.565 [2024-12-10 22:58:07.068464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.565 [2024-12-10 22:58:07.068483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.565 [2024-12-10 22:58:07.068495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.565 [2024-12-10 22:58:07.068507] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.565 [2024-12-10 22:58:07.080757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.565 [2024-12-10 22:58:07.081101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.565 [2024-12-10 22:58:07.081129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.565 [2024-12-10 22:58:07.081144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.565 [2024-12-10 22:58:07.081361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.565 [2024-12-10 22:58:07.081593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.565 [2024-12-10 22:58:07.081613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.565 [2024-12-10 22:58:07.081625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.565 [2024-12-10 22:58:07.081637] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.565 [2024-12-10 22:58:07.093905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.565 [2024-12-10 22:58:07.094315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.565 [2024-12-10 22:58:07.094345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.565 [2024-12-10 22:58:07.094362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.565 [2024-12-10 22:58:07.094614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.565 [2024-12-10 22:58:07.094824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.565 [2024-12-10 22:58:07.094845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.565 [2024-12-10 22:58:07.094858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.565 [2024-12-10 22:58:07.094871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.565 7048.00 IOPS, 27.53 MiB/s [2024-12-10T21:58:07.297Z] [2024-12-10 22:58:07.108457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.565 [2024-12-10 22:58:07.108786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.565 [2024-12-10 22:58:07.108830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:26:59.565 [2024-12-10 22:58:07.108846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:26:59.565 [2024-12-10 22:58:07.109086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:26:59.565 [2024-12-10 22:58:07.109291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.565 [2024-12-10 22:58:07.109310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.565 [2024-12-10 22:58:07.109323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.565 [2024-12-10 22:58:07.109334] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.565 [2024-12-10 22:58:07.121488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.565 [2024-12-10 22:58:07.121857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.565 [2024-12-10 22:58:07.121901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.565 [2024-12-10 22:58:07.121917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.565 [2024-12-10 22:58:07.122148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.565 [2024-12-10 22:58:07.122338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.565 [2024-12-10 22:58:07.122356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.565 [2024-12-10 22:58:07.122368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.565 [2024-12-10 22:58:07.122380] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.565 [2024-12-10 22:58:07.134708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.565 [2024-12-10 22:58:07.135060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.565 [2024-12-10 22:58:07.135089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.565 [2024-12-10 22:58:07.135106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.565 [2024-12-10 22:58:07.135344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.565 [2024-12-10 22:58:07.135574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.565 [2024-12-10 22:58:07.135594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.565 [2024-12-10 22:58:07.135607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.565 [2024-12-10 22:58:07.135634] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.565 [2024-12-10 22:58:07.147741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.565 [2024-12-10 22:58:07.148148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.565 [2024-12-10 22:58:07.148176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.565 [2024-12-10 22:58:07.148192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.565 [2024-12-10 22:58:07.148433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.565 [2024-12-10 22:58:07.148688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.565 [2024-12-10 22:58:07.148717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.565 [2024-12-10 22:58:07.148731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.565 [2024-12-10 22:58:07.148744] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.565 [2024-12-10 22:58:07.160920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.565 [2024-12-10 22:58:07.161267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.565 [2024-12-10 22:58:07.161295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.565 [2024-12-10 22:58:07.161311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.565 [2024-12-10 22:58:07.161558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.565 [2024-12-10 22:58:07.161773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.565 [2024-12-10 22:58:07.161793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.565 [2024-12-10 22:58:07.161806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.566 [2024-12-10 22:58:07.161818] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.566 [2024-12-10 22:58:07.173981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.566 [2024-12-10 22:58:07.174292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.566 [2024-12-10 22:58:07.174319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.566 [2024-12-10 22:58:07.174335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.566 [2024-12-10 22:58:07.174565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.566 [2024-12-10 22:58:07.174767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.566 [2024-12-10 22:58:07.174787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.566 [2024-12-10 22:58:07.174800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.566 [2024-12-10 22:58:07.174812] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.566 [2024-12-10 22:58:07.187101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.566 [2024-12-10 22:58:07.187394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.566 [2024-12-10 22:58:07.187435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.566 [2024-12-10 22:58:07.187451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.566 [2024-12-10 22:58:07.187712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.566 [2024-12-10 22:58:07.187914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.566 [2024-12-10 22:58:07.187939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.566 [2024-12-10 22:58:07.187953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.566 [2024-12-10 22:58:07.187965] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.566 [2024-12-10 22:58:07.200134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.566 [2024-12-10 22:58:07.200482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.566 [2024-12-10 22:58:07.200510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.566 [2024-12-10 22:58:07.200526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.566 [2024-12-10 22:58:07.200792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.566 [2024-12-10 22:58:07.201012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.566 [2024-12-10 22:58:07.201032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.566 [2024-12-10 22:58:07.201044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.566 [2024-12-10 22:58:07.201056] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.566 [2024-12-10 22:58:07.213278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.566 [2024-12-10 22:58:07.213624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.566 [2024-12-10 22:58:07.213653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.566 [2024-12-10 22:58:07.213669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.566 [2024-12-10 22:58:07.213907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.566 [2024-12-10 22:58:07.214113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.566 [2024-12-10 22:58:07.214132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.566 [2024-12-10 22:58:07.214144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.566 [2024-12-10 22:58:07.214156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.566 [2024-12-10 22:58:07.226278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.566 [2024-12-10 22:58:07.226627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.566 [2024-12-10 22:58:07.226656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.566 [2024-12-10 22:58:07.226672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.566 [2024-12-10 22:58:07.226909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.566 [2024-12-10 22:58:07.227114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.566 [2024-12-10 22:58:07.227133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.566 [2024-12-10 22:58:07.227146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.566 [2024-12-10 22:58:07.227162] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.566 [2024-12-10 22:58:07.239359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.566 [2024-12-10 22:58:07.239776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.566 [2024-12-10 22:58:07.239804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.566 [2024-12-10 22:58:07.239820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.566 [2024-12-10 22:58:07.240053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.566 [2024-12-10 22:58:07.240259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.566 [2024-12-10 22:58:07.240277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.566 [2024-12-10 22:58:07.240290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.566 [2024-12-10 22:58:07.240301] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.566 [2024-12-10 22:58:07.252424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.566 [2024-12-10 22:58:07.252830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.566 [2024-12-10 22:58:07.252880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.566 [2024-12-10 22:58:07.252897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.566 [2024-12-10 22:58:07.253143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.566 [2024-12-10 22:58:07.253333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.566 [2024-12-10 22:58:07.253352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.566 [2024-12-10 22:58:07.253364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.566 [2024-12-10 22:58:07.253376] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.566 [2024-12-10 22:58:07.265498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.566 [2024-12-10 22:58:07.265930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.566 [2024-12-10 22:58:07.265958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.566 [2024-12-10 22:58:07.265974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.566 [2024-12-10 22:58:07.266211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.566 [2024-12-10 22:58:07.266417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.566 [2024-12-10 22:58:07.266436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.566 [2024-12-10 22:58:07.266449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.566 [2024-12-10 22:58:07.266460] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.566 [2024-12-10 22:58:07.278568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.566 [2024-12-10 22:58:07.278950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.566 [2024-12-10 22:58:07.278978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.566 [2024-12-10 22:58:07.278994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.566 [2024-12-10 22:58:07.279224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.566 [2024-12-10 22:58:07.279429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.566 [2024-12-10 22:58:07.279449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.566 [2024-12-10 22:58:07.279461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.567 [2024-12-10 22:58:07.279472] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.567 [2024-12-10 22:58:07.291813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.567 [2024-12-10 22:58:07.292163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.567 [2024-12-10 22:58:07.292193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.567 [2024-12-10 22:58:07.292210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.567 [2024-12-10 22:58:07.292447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.567 [2024-12-10 22:58:07.292717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.567 [2024-12-10 22:58:07.292739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.567 [2024-12-10 22:58:07.292753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.567 [2024-12-10 22:58:07.292765] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.826 [2024-12-10 22:58:07.304899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.826 [2024-12-10 22:58:07.305194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.826 [2024-12-10 22:58:07.305236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.826 [2024-12-10 22:58:07.305252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.826 [2024-12-10 22:58:07.305450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.826 [2024-12-10 22:58:07.305699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.826 [2024-12-10 22:58:07.305720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.827 [2024-12-10 22:58:07.305733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.827 [2024-12-10 22:58:07.305745] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.827 [2024-12-10 22:58:07.317963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.827 [2024-12-10 22:58:07.318307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.827 [2024-12-10 22:58:07.318335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.827 [2024-12-10 22:58:07.318351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.827 [2024-12-10 22:58:07.318598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.827 [2024-12-10 22:58:07.318795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.827 [2024-12-10 22:58:07.318815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.827 [2024-12-10 22:58:07.318828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.827 [2024-12-10 22:58:07.318840] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.827 [2024-12-10 22:58:07.331128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.827 [2024-12-10 22:58:07.331474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.827 [2024-12-10 22:58:07.331502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.827 [2024-12-10 22:58:07.331518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.827 [2024-12-10 22:58:07.331785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.827 [2024-12-10 22:58:07.332009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.827 [2024-12-10 22:58:07.332028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.827 [2024-12-10 22:58:07.332040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.827 [2024-12-10 22:58:07.332052] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.827 [2024-12-10 22:58:07.344266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.827 [2024-12-10 22:58:07.344711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.827 [2024-12-10 22:58:07.344740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.827 [2024-12-10 22:58:07.344757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.827 [2024-12-10 22:58:07.345004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.827 [2024-12-10 22:58:07.345219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.827 [2024-12-10 22:58:07.345239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.827 [2024-12-10 22:58:07.345252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.827 [2024-12-10 22:58:07.345264] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.827 [2024-12-10 22:58:07.357559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.827 [2024-12-10 22:58:07.357882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.827 [2024-12-10 22:58:07.357910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.827 [2024-12-10 22:58:07.357926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.827 [2024-12-10 22:58:07.358144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.827 [2024-12-10 22:58:07.358351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.827 [2024-12-10 22:58:07.358374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.827 [2024-12-10 22:58:07.358387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.827 [2024-12-10 22:58:07.358399] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.827 [2024-12-10 22:58:07.370818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.827 [2024-12-10 22:58:07.371159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.827 [2024-12-10 22:58:07.371188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.827 [2024-12-10 22:58:07.371204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.827 [2024-12-10 22:58:07.371422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.827 [2024-12-10 22:58:07.371664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.827 [2024-12-10 22:58:07.371686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.827 [2024-12-10 22:58:07.371699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.827 [2024-12-10 22:58:07.371712] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.827 [2024-12-10 22:58:07.384098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.827 [2024-12-10 22:58:07.384507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.827 [2024-12-10 22:58:07.384557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.827 [2024-12-10 22:58:07.384575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.827 [2024-12-10 22:58:07.384814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.827 [2024-12-10 22:58:07.385036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.827 [2024-12-10 22:58:07.385055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.827 [2024-12-10 22:58:07.385068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.827 [2024-12-10 22:58:07.385079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.827 [2024-12-10 22:58:07.397185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.827 [2024-12-10 22:58:07.397591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.827 [2024-12-10 22:58:07.397619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.827 [2024-12-10 22:58:07.397635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.827 [2024-12-10 22:58:07.397867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.827 [2024-12-10 22:58:07.398073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.827 [2024-12-10 22:58:07.398092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.827 [2024-12-10 22:58:07.398104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.827 [2024-12-10 22:58:07.398120] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.827 [2024-12-10 22:58:07.410376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.827 [2024-12-10 22:58:07.410730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.827 [2024-12-10 22:58:07.410758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.827 [2024-12-10 22:58:07.410774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.827 [2024-12-10 22:58:07.411011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.827 [2024-12-10 22:58:07.411202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.827 [2024-12-10 22:58:07.411221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.827 [2024-12-10 22:58:07.411234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.827 [2024-12-10 22:58:07.411245] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.827 [2024-12-10 22:58:07.423630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.827 [2024-12-10 22:58:07.424022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.827 [2024-12-10 22:58:07.424051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.827 [2024-12-10 22:58:07.424067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.827 [2024-12-10 22:58:07.424304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.828 [2024-12-10 22:58:07.424509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.828 [2024-12-10 22:58:07.424528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.828 [2024-12-10 22:58:07.424540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.828 [2024-12-10 22:58:07.424577] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.828 [2024-12-10 22:58:07.436768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.828 [2024-12-10 22:58:07.437114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.828 [2024-12-10 22:58:07.437142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.828 [2024-12-10 22:58:07.437158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.828 [2024-12-10 22:58:07.437395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.828 [2024-12-10 22:58:07.437630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.828 [2024-12-10 22:58:07.437651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.828 [2024-12-10 22:58:07.437664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.828 [2024-12-10 22:58:07.437676] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.828 [2024-12-10 22:58:07.449918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.828 [2024-12-10 22:58:07.450333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.828 [2024-12-10 22:58:07.450362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.828 [2024-12-10 22:58:07.450378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.828 [2024-12-10 22:58:07.450626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.828 [2024-12-10 22:58:07.450837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.828 [2024-12-10 22:58:07.450871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.828 [2024-12-10 22:58:07.450884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.828 [2024-12-10 22:58:07.450896] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.828 [2024-12-10 22:58:07.462957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.828 [2024-12-10 22:58:07.463322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.828 [2024-12-10 22:58:07.463364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.828 [2024-12-10 22:58:07.463380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.828 [2024-12-10 22:58:07.463608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.828 [2024-12-10 22:58:07.463820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.828 [2024-12-10 22:58:07.463840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.828 [2024-12-10 22:58:07.463852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.828 [2024-12-10 22:58:07.463879] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.828 [2024-12-10 22:58:07.476118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.828 [2024-12-10 22:58:07.476524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.828 [2024-12-10 22:58:07.476559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.828 [2024-12-10 22:58:07.476576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.828 [2024-12-10 22:58:07.476813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.828 [2024-12-10 22:58:07.477018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.828 [2024-12-10 22:58:07.477037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.828 [2024-12-10 22:58:07.477050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.828 [2024-12-10 22:58:07.477062] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.828 [2024-12-10 22:58:07.489213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.828 [2024-12-10 22:58:07.489557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.828 [2024-12-10 22:58:07.489585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.828 [2024-12-10 22:58:07.489601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.828 [2024-12-10 22:58:07.489837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.828 [2024-12-10 22:58:07.490028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.828 [2024-12-10 22:58:07.490047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.828 [2024-12-10 22:58:07.490059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.828 [2024-12-10 22:58:07.490071] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.828 [2024-12-10 22:58:07.502538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.828 [2024-12-10 22:58:07.502923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.828 [2024-12-10 22:58:07.502952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.828 [2024-12-10 22:58:07.502968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.828 [2024-12-10 22:58:07.503207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.828 [2024-12-10 22:58:07.503397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.828 [2024-12-10 22:58:07.503416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.828 [2024-12-10 22:58:07.503428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.828 [2024-12-10 22:58:07.503440] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.828 [2024-12-10 22:58:07.515779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.828 [2024-12-10 22:58:07.516115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.828 [2024-12-10 22:58:07.516143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.828 [2024-12-10 22:58:07.516159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.828 [2024-12-10 22:58:07.516380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.828 [2024-12-10 22:58:07.516613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.828 [2024-12-10 22:58:07.516633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.828 [2024-12-10 22:58:07.516646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.828 [2024-12-10 22:58:07.516658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.828 [2024-12-10 22:58:07.528875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.828 [2024-12-10 22:58:07.529220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.828 [2024-12-10 22:58:07.529247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.828 [2024-12-10 22:58:07.529263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.828 [2024-12-10 22:58:07.529480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.828 [2024-12-10 22:58:07.529718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.828 [2024-12-10 22:58:07.529744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.828 [2024-12-10 22:58:07.529758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.828 [2024-12-10 22:58:07.529770] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.828 [2024-12-10 22:58:07.542038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.828 [2024-12-10 22:58:07.542447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.828 [2024-12-10 22:58:07.542475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:26:59.828 [2024-12-10 22:58:07.542491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:26:59.828 [2024-12-10 22:58:07.542761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:26:59.828 [2024-12-10 22:58:07.542992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.828 [2024-12-10 22:58:07.543010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.829 [2024-12-10 22:58:07.543023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.829 [2024-12-10 22:58:07.543035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.829 [2024-12-10 22:58:07.555315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.090 [2024-12-10 22:58:07.555730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.090 [2024-12-10 22:58:07.555761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.090 [2024-12-10 22:58:07.555784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.090 [2024-12-10 22:58:07.556037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.090 [2024-12-10 22:58:07.556255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.090 [2024-12-10 22:58:07.556275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.090 [2024-12-10 22:58:07.556288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.090 [2024-12-10 22:58:07.556300] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.090 [2024-12-10 22:58:07.568368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.090 [2024-12-10 22:58:07.568784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.090 [2024-12-10 22:58:07.568813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.090 [2024-12-10 22:58:07.568829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.090 [2024-12-10 22:58:07.569060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.090 [2024-12-10 22:58:07.569266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.090 [2024-12-10 22:58:07.569285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.090 [2024-12-10 22:58:07.569298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.090 [2024-12-10 22:58:07.569314] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.090 [2024-12-10 22:58:07.581761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.090 [2024-12-10 22:58:07.582205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.090 [2024-12-10 22:58:07.582233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.090 [2024-12-10 22:58:07.582249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.090 [2024-12-10 22:58:07.582465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.090 [2024-12-10 22:58:07.582717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.090 [2024-12-10 22:58:07.582738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.090 [2024-12-10 22:58:07.582751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.090 [2024-12-10 22:58:07.582763] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.090 [2024-12-10 22:58:07.595125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.090 [2024-12-10 22:58:07.595553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.090 [2024-12-10 22:58:07.595584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.090 [2024-12-10 22:58:07.595600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.090 [2024-12-10 22:58:07.595832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.090 [2024-12-10 22:58:07.596063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.090 [2024-12-10 22:58:07.596084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.090 [2024-12-10 22:58:07.596098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.090 [2024-12-10 22:58:07.596112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.090 [2024-12-10 22:58:07.608391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.090 [2024-12-10 22:58:07.608788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.090 [2024-12-10 22:58:07.608818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.090 [2024-12-10 22:58:07.608835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.090 [2024-12-10 22:58:07.609090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.090 [2024-12-10 22:58:07.609295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.090 [2024-12-10 22:58:07.609315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.090 [2024-12-10 22:58:07.609328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.090 [2024-12-10 22:58:07.609340] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.090 [2024-12-10 22:58:07.621703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.090 [2024-12-10 22:58:07.622142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.090 [2024-12-10 22:58:07.622196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.090 [2024-12-10 22:58:07.622213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.090 [2024-12-10 22:58:07.622462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.090 [2024-12-10 22:58:07.622702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.091 [2024-12-10 22:58:07.622724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.091 [2024-12-10 22:58:07.622738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.091 [2024-12-10 22:58:07.622750] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.091 [2024-12-10 22:58:07.634949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.091 [2024-12-10 22:58:07.635296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.091 [2024-12-10 22:58:07.635326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.091 [2024-12-10 22:58:07.635342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.091 [2024-12-10 22:58:07.635593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.091 [2024-12-10 22:58:07.635795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.091 [2024-12-10 22:58:07.635816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.091 [2024-12-10 22:58:07.635829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.091 [2024-12-10 22:58:07.635841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.091 [2024-12-10 22:58:07.647987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.091 [2024-12-10 22:58:07.648363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.091 [2024-12-10 22:58:07.648392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.091 [2024-12-10 22:58:07.648408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.091 [2024-12-10 22:58:07.648625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.091 [2024-12-10 22:58:07.648845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.091 [2024-12-10 22:58:07.648866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.091 [2024-12-10 22:58:07.648879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.091 [2024-12-10 22:58:07.648891] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.091 [2024-12-10 22:58:07.661335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.091 [2024-12-10 22:58:07.661673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.091 [2024-12-10 22:58:07.661703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.091 [2024-12-10 22:58:07.661720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.091 [2024-12-10 22:58:07.661982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.091 [2024-12-10 22:58:07.662176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.091 [2024-12-10 22:58:07.662196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.091 [2024-12-10 22:58:07.662208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.091 [2024-12-10 22:58:07.662220] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.091 [2024-12-10 22:58:07.674837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.091 [2024-12-10 22:58:07.675166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.091 [2024-12-10 22:58:07.675195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.091 [2024-12-10 22:58:07.675212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.091 [2024-12-10 22:58:07.675432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.091 [2024-12-10 22:58:07.675692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.091 [2024-12-10 22:58:07.675716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.091 [2024-12-10 22:58:07.675730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.091 [2024-12-10 22:58:07.675744] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.091 [2024-12-10 22:58:07.688136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.091 [2024-12-10 22:58:07.688434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.091 [2024-12-10 22:58:07.688524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.091 [2024-12-10 22:58:07.688541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.091 [2024-12-10 22:58:07.688767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.091 [2024-12-10 22:58:07.688975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.091 [2024-12-10 22:58:07.688994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.091 [2024-12-10 22:58:07.689007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.091 [2024-12-10 22:58:07.689019] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.091 [2024-12-10 22:58:07.701368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.091 [2024-12-10 22:58:07.701755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.091 [2024-12-10 22:58:07.701784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.091 [2024-12-10 22:58:07.701800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.091 [2024-12-10 22:58:07.702019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.091 [2024-12-10 22:58:07.702225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.091 [2024-12-10 22:58:07.702250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.091 [2024-12-10 22:58:07.702263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.091 [2024-12-10 22:58:07.702275] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.091 [2024-12-10 22:58:07.714906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.091 [2024-12-10 22:58:07.715319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.091 [2024-12-10 22:58:07.715349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.091 [2024-12-10 22:58:07.715366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.091 [2024-12-10 22:58:07.715619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.091 [2024-12-10 22:58:07.715862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.091 [2024-12-10 22:58:07.715884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.091 [2024-12-10 22:58:07.715897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.091 [2024-12-10 22:58:07.715910] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.091 [2024-12-10 22:58:07.728143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.091 [2024-12-10 22:58:07.728483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.091 [2024-12-10 22:58:07.728511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.091 [2024-12-10 22:58:07.728542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.091 [2024-12-10 22:58:07.728785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.091 [2024-12-10 22:58:07.729017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.091 [2024-12-10 22:58:07.729037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.091 [2024-12-10 22:58:07.729050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.091 [2024-12-10 22:58:07.729062] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.091 [2024-12-10 22:58:07.741403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.091 [2024-12-10 22:58:07.741813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.091 [2024-12-10 22:58:07.741843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.091 [2024-12-10 22:58:07.741860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.091 [2024-12-10 22:58:07.742101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.091 [2024-12-10 22:58:07.742312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.091 [2024-12-10 22:58:07.742332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.092 [2024-12-10 22:58:07.742345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.092 [2024-12-10 22:58:07.742361] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.092 [2024-12-10 22:58:07.754642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.092 [2024-12-10 22:58:07.755057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.092 [2024-12-10 22:58:07.755086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.092 [2024-12-10 22:58:07.755102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.092 [2024-12-10 22:58:07.755341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.092 [2024-12-10 22:58:07.755573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.092 [2024-12-10 22:58:07.755607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.092 [2024-12-10 22:58:07.755621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.092 [2024-12-10 22:58:07.755634] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.092 [2024-12-10 22:58:07.767851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.092 [2024-12-10 22:58:07.768258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.092 [2024-12-10 22:58:07.768287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.092 [2024-12-10 22:58:07.768303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.092 [2024-12-10 22:58:07.768539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.092 [2024-12-10 22:58:07.768780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.092 [2024-12-10 22:58:07.768801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.092 [2024-12-10 22:58:07.768826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.092 [2024-12-10 22:58:07.768839] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.092 [2024-12-10 22:58:07.781010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.092 [2024-12-10 22:58:07.781437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.092 [2024-12-10 22:58:07.781466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.092 [2024-12-10 22:58:07.781498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.092 [2024-12-10 22:58:07.781764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.092 [2024-12-10 22:58:07.781973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.092 [2024-12-10 22:58:07.781993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.092 [2024-12-10 22:58:07.782006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.092 [2024-12-10 22:58:07.782017] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.092 [2024-12-10 22:58:07.794325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.092 [2024-12-10 22:58:07.794653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.092 [2024-12-10 22:58:07.794681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.092 [2024-12-10 22:58:07.794697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.092 [2024-12-10 22:58:07.794916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.092 [2024-12-10 22:58:07.795122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.092 [2024-12-10 22:58:07.795142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.092 [2024-12-10 22:58:07.795156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.092 [2024-12-10 22:58:07.795168] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.092 [2024-12-10 22:58:07.807468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.092 [2024-12-10 22:58:07.807864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.092 [2024-12-10 22:58:07.807893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.092 [2024-12-10 22:58:07.807909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.092 [2024-12-10 22:58:07.808127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.092 [2024-12-10 22:58:07.808334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.092 [2024-12-10 22:58:07.808354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.092 [2024-12-10 22:58:07.808366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.092 [2024-12-10 22:58:07.808378] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.354 [2024-12-10 22:58:07.820820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.354 [2024-12-10 22:58:07.821197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.354 [2024-12-10 22:58:07.821241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.354 [2024-12-10 22:58:07.821257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.354 [2024-12-10 22:58:07.821476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.354 [2024-12-10 22:58:07.821712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.354 [2024-12-10 22:58:07.821734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.354 [2024-12-10 22:58:07.821747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.354 [2024-12-10 22:58:07.821759] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.354 [2024-12-10 22:58:07.834024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.354 [2024-12-10 22:58:07.834373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.354 [2024-12-10 22:58:07.834403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.354 [2024-12-10 22:58:07.834419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.354 [2024-12-10 22:58:07.834676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.354 [2024-12-10 22:58:07.834902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.354 [2024-12-10 22:58:07.834922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.354 [2024-12-10 22:58:07.834934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.354 [2024-12-10 22:58:07.834946] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.354 [2024-12-10 22:58:07.847173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.354 [2024-12-10 22:58:07.847578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.354 [2024-12-10 22:58:07.847631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.354 [2024-12-10 22:58:07.847649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.354 [2024-12-10 22:58:07.847893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.354 [2024-12-10 22:58:07.848121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.354 [2024-12-10 22:58:07.848157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.354 [2024-12-10 22:58:07.848171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.354 [2024-12-10 22:58:07.848184] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.354 [2024-12-10 22:58:07.860494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.354 [2024-12-10 22:58:07.860899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.354 [2024-12-10 22:58:07.860945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.354 [2024-12-10 22:58:07.860961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.354 [2024-12-10 22:58:07.861198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.354 [2024-12-10 22:58:07.861387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.354 [2024-12-10 22:58:07.861406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.354 [2024-12-10 22:58:07.861419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.354 [2024-12-10 22:58:07.861431] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.354 [2024-12-10 22:58:07.873736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.354 [2024-12-10 22:58:07.874069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.354 [2024-12-10 22:58:07.874098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.354 [2024-12-10 22:58:07.874114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.354 [2024-12-10 22:58:07.874333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.354 [2024-12-10 22:58:07.874567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.354 [2024-12-10 22:58:07.874606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.354 [2024-12-10 22:58:07.874620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.354 [2024-12-10 22:58:07.874633] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.354 [2024-12-10 22:58:07.886797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.354 [2024-12-10 22:58:07.887210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.354 [2024-12-10 22:58:07.887240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.354 [2024-12-10 22:58:07.887257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.354 [2024-12-10 22:58:07.887493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.354 [2024-12-10 22:58:07.887736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.354 [2024-12-10 22:58:07.887758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.354 [2024-12-10 22:58:07.887771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.354 [2024-12-10 22:58:07.887784] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.354 [2024-12-10 22:58:07.900052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.354 [2024-12-10 22:58:07.900397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.354 [2024-12-10 22:58:07.900426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.354 [2024-12-10 22:58:07.900442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.354 [2024-12-10 22:58:07.900715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.354 [2024-12-10 22:58:07.900912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.354 [2024-12-10 22:58:07.900933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.354 [2024-12-10 22:58:07.900945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.354 [2024-12-10 22:58:07.900957] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.354 [2024-12-10 22:58:07.913091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.354 [2024-12-10 22:58:07.913464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.355 [2024-12-10 22:58:07.913492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.355 [2024-12-10 22:58:07.913507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.355 [2024-12-10 22:58:07.913774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.355 [2024-12-10 22:58:07.913982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.355 [2024-12-10 22:58:07.914002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.355 [2024-12-10 22:58:07.914015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.355 [2024-12-10 22:58:07.914032] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.355 [2024-12-10 22:58:07.926365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.355 [2024-12-10 22:58:07.926705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.355 [2024-12-10 22:58:07.926735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.355 [2024-12-10 22:58:07.926753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.355 [2024-12-10 22:58:07.927004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.355 [2024-12-10 22:58:07.927196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.355 [2024-12-10 22:58:07.927216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.355 [2024-12-10 22:58:07.927229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.355 [2024-12-10 22:58:07.927241] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.355 [2024-12-10 22:58:07.939506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.355 [2024-12-10 22:58:07.939834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.355 [2024-12-10 22:58:07.939877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.355 [2024-12-10 22:58:07.939892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.355 [2024-12-10 22:58:07.940105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.355 [2024-12-10 22:58:07.940310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.355 [2024-12-10 22:58:07.940330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.355 [2024-12-10 22:58:07.940342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.355 [2024-12-10 22:58:07.940354] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.355 [2024-12-10 22:58:07.952570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.355 [2024-12-10 22:58:07.952928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.355 [2024-12-10 22:58:07.952969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.355 [2024-12-10 22:58:07.952985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.355 [2024-12-10 22:58:07.953204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.355 [2024-12-10 22:58:07.953408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.355 [2024-12-10 22:58:07.953427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.355 [2024-12-10 22:58:07.953439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.355 [2024-12-10 22:58:07.953450] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.355 [2024-12-10 22:58:07.965789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.355 [2024-12-10 22:58:07.966157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.355 [2024-12-10 22:58:07.966185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.355 [2024-12-10 22:58:07.966201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.355 [2024-12-10 22:58:07.966433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.355 [2024-12-10 22:58:07.966685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.355 [2024-12-10 22:58:07.966707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.355 [2024-12-10 22:58:07.966722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.355 [2024-12-10 22:58:07.966734] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.355 [2024-12-10 22:58:07.978911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.355 [2024-12-10 22:58:07.979256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.355 [2024-12-10 22:58:07.979284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.355 [2024-12-10 22:58:07.979316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.355 [2024-12-10 22:58:07.979571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.355 [2024-12-10 22:58:07.979784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.355 [2024-12-10 22:58:07.979805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.355 [2024-12-10 22:58:07.979819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.355 [2024-12-10 22:58:07.979831] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.355 [2024-12-10 22:58:07.992102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.355 [2024-12-10 22:58:07.992524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.355 [2024-12-10 22:58:07.992577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.355 [2024-12-10 22:58:07.992596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.355 [2024-12-10 22:58:07.992852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.355 [2024-12-10 22:58:07.993042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.355 [2024-12-10 22:58:07.993062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.355 [2024-12-10 22:58:07.993074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.355 [2024-12-10 22:58:07.993088] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.355 [2024-12-10 22:58:08.005559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.355 [2024-12-10 22:58:08.005972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.355 [2024-12-10 22:58:08.006001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.355 [2024-12-10 22:58:08.006018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.355 [2024-12-10 22:58:08.006249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.355 [2024-12-10 22:58:08.006460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.355 [2024-12-10 22:58:08.006478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.355 [2024-12-10 22:58:08.006491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.355 [2024-12-10 22:58:08.006504] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.355 [2024-12-10 22:58:08.019234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.355 [2024-12-10 22:58:08.019594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.355 [2024-12-10 22:58:08.019624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.355 [2024-12-10 22:58:08.019642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.355 [2024-12-10 22:58:08.019873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.355 [2024-12-10 22:58:08.020090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.355 [2024-12-10 22:58:08.020111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.355 [2024-12-10 22:58:08.020124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.355 [2024-12-10 22:58:08.020153] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.355 [2024-12-10 22:58:08.032864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.355 [2024-12-10 22:58:08.033317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.355 [2024-12-10 22:58:08.033367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.355 [2024-12-10 22:58:08.033385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.355 [2024-12-10 22:58:08.033644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.356 [2024-12-10 22:58:08.033865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.356 [2024-12-10 22:58:08.033903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.356 [2024-12-10 22:58:08.033917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.356 [2024-12-10 22:58:08.033930] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.356 [2024-12-10 22:58:08.046541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.356 [2024-12-10 22:58:08.046875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.356 [2024-12-10 22:58:08.046906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.356 [2024-12-10 22:58:08.046924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.356 [2024-12-10 22:58:08.047161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.356 [2024-12-10 22:58:08.047379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.356 [2024-12-10 22:58:08.047405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.356 [2024-12-10 22:58:08.047420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.356 [2024-12-10 22:58:08.047433] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.356 [2024-12-10 22:58:08.060099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.356 [2024-12-10 22:58:08.060487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.356 [2024-12-10 22:58:08.060517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.356 [2024-12-10 22:58:08.060534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.356 [2024-12-10 22:58:08.060763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.356 [2024-12-10 22:58:08.061002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.356 [2024-12-10 22:58:08.061022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.356 [2024-12-10 22:58:08.061051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.356 [2024-12-10 22:58:08.061066] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.356 [2024-12-10 22:58:08.073630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.356 [2024-12-10 22:58:08.073976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.356 [2024-12-10 22:58:08.074005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.356 [2024-12-10 22:58:08.074022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.356 [2024-12-10 22:58:08.074247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.356 [2024-12-10 22:58:08.074464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.356 [2024-12-10 22:58:08.074485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.356 [2024-12-10 22:58:08.074498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.356 [2024-12-10 22:58:08.074511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.618 [2024-12-10 22:58:08.087003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.618 [2024-12-10 22:58:08.087433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.618 [2024-12-10 22:58:08.087465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.618 [2024-12-10 22:58:08.087492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.618 [2024-12-10 22:58:08.087751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.618 [2024-12-10 22:58:08.087988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.618 [2024-12-10 22:58:08.088010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.618 [2024-12-10 22:58:08.088023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.618 [2024-12-10 22:58:08.088047] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.618 [2024-12-10 22:58:08.100516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.618 [2024-12-10 22:58:08.100898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.618 [2024-12-10 22:58:08.100950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.618 [2024-12-10 22:58:08.100967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.618 [2024-12-10 22:58:08.101211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.618 [2024-12-10 22:58:08.101470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.618 [2024-12-10 22:58:08.101492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.618 [2024-12-10 22:58:08.101507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.618 [2024-12-10 22:58:08.101521] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.618 5286.00 IOPS, 20.65 MiB/s [2024-12-10T21:58:08.350Z]
00:27:00.618 [2024-12-10 22:58:08.113987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.618 [2024-12-10 22:58:08.114418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.618 [2024-12-10 22:58:08.114467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.618 [2024-12-10 22:58:08.114484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.618 [2024-12-10 22:58:08.114726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.618 [2024-12-10 22:58:08.114949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.618 [2024-12-10 22:58:08.114969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.618 [2024-12-10 22:58:08.114981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.618 [2024-12-10 22:58:08.114993] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.618 [2024-12-10 22:58:08.127348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.618 [2024-12-10 22:58:08.127717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.618 [2024-12-10 22:58:08.127748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.618 [2024-12-10 22:58:08.127764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.618 [2024-12-10 22:58:08.128018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.618 [2024-12-10 22:58:08.128207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.618 [2024-12-10 22:58:08.128226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.618 [2024-12-10 22:58:08.128238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.618 [2024-12-10 22:58:08.128250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.618 [2024-12-10 22:58:08.140723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.618 [2024-12-10 22:58:08.141141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.618 [2024-12-10 22:58:08.141179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.618 [2024-12-10 22:58:08.141214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.618 [2024-12-10 22:58:08.141446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.618 [2024-12-10 22:58:08.141686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.618 [2024-12-10 22:58:08.141707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.618 [2024-12-10 22:58:08.141720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.618 [2024-12-10 22:58:08.141732] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.618 [2024-12-10 22:58:08.153915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.618 [2024-12-10 22:58:08.154275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.618 [2024-12-10 22:58:08.154325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.618 [2024-12-10 22:58:08.154342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.618 [2024-12-10 22:58:08.154604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.618 [2024-12-10 22:58:08.154806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.618 [2024-12-10 22:58:08.154827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.618 [2024-12-10 22:58:08.154856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.618 [2024-12-10 22:58:08.154869] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.618 [2024-12-10 22:58:08.167223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.618 [2024-12-10 22:58:08.167583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.618 [2024-12-10 22:58:08.167629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.618 [2024-12-10 22:58:08.167646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.618 [2024-12-10 22:58:08.167896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.618 [2024-12-10 22:58:08.168088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.618 [2024-12-10 22:58:08.168108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.618 [2024-12-10 22:58:08.168121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.618 [2024-12-10 22:58:08.168133] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.618 [2024-12-10 22:58:08.180410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.619 [2024-12-10 22:58:08.180781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.619 [2024-12-10 22:58:08.180809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.619 [2024-12-10 22:58:08.180825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.619 [2024-12-10 22:58:08.181043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.619 [2024-12-10 22:58:08.181248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.619 [2024-12-10 22:58:08.181268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.619 [2024-12-10 22:58:08.181281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.619 [2024-12-10 22:58:08.181292] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.619 [2024-12-10 22:58:08.193528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.619 [2024-12-10 22:58:08.193895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.619 [2024-12-10 22:58:08.193925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.619 [2024-12-10 22:58:08.193941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.619 [2024-12-10 22:58:08.194180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.619 [2024-12-10 22:58:08.194375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.619 [2024-12-10 22:58:08.194395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.619 [2024-12-10 22:58:08.194410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.619 [2024-12-10 22:58:08.194422] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.619 [2024-12-10 22:58:08.206751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.619 [2024-12-10 22:58:08.207145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.619 [2024-12-10 22:58:08.207173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.619 [2024-12-10 22:58:08.207189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.619 [2024-12-10 22:58:08.207408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.619 [2024-12-10 22:58:08.207663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.619 [2024-12-10 22:58:08.207685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.619 [2024-12-10 22:58:08.207698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.619 [2024-12-10 22:58:08.207711] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.619 [2024-12-10 22:58:08.219928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.619 [2024-12-10 22:58:08.220273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.619 [2024-12-10 22:58:08.220303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.619 [2024-12-10 22:58:08.220319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.619 [2024-12-10 22:58:08.220570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.619 [2024-12-10 22:58:08.220786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.619 [2024-12-10 22:58:08.220812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.619 [2024-12-10 22:58:08.220826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.619 [2024-12-10 22:58:08.220854] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.619 [2024-12-10 22:58:08.232991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.619 [2024-12-10 22:58:08.233399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.619 [2024-12-10 22:58:08.233427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.619 [2024-12-10 22:58:08.233442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.619 [2024-12-10 22:58:08.233706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.619 [2024-12-10 22:58:08.233923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.619 [2024-12-10 22:58:08.233957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.619 [2024-12-10 22:58:08.233971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.619 [2024-12-10 22:58:08.233983] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.619 [2024-12-10 22:58:08.246168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.619 [2024-12-10 22:58:08.246513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.619 [2024-12-10 22:58:08.246566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.619 [2024-12-10 22:58:08.246585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.619 [2024-12-10 22:58:08.246818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.619 [2024-12-10 22:58:08.247022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.619 [2024-12-10 22:58:08.247052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.619 [2024-12-10 22:58:08.247065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.619 [2024-12-10 22:58:08.247078] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.619 [2024-12-10 22:58:08.259179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.619 [2024-12-10 22:58:08.259585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.619 [2024-12-10 22:58:08.259614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.619 [2024-12-10 22:58:08.259630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.619 [2024-12-10 22:58:08.259869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.619 [2024-12-10 22:58:08.260074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.619 [2024-12-10 22:58:08.260094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.619 [2024-12-10 22:58:08.260106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.619 [2024-12-10 22:58:08.260123] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.619 [2024-12-10 22:58:08.272279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.619 [2024-12-10 22:58:08.272688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.619 [2024-12-10 22:58:08.272718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.619 [2024-12-10 22:58:08.272734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.619 [2024-12-10 22:58:08.272970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.619 [2024-12-10 22:58:08.273175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.619 [2024-12-10 22:58:08.273196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.619 [2024-12-10 22:58:08.273208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.619 [2024-12-10 22:58:08.273219] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.619 [2024-12-10 22:58:08.285569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.619 [2024-12-10 22:58:08.285890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.619 [2024-12-10 22:58:08.285918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.619 [2024-12-10 22:58:08.285935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.619 [2024-12-10 22:58:08.286154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.619 [2024-12-10 22:58:08.286360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.619 [2024-12-10 22:58:08.286380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.619 [2024-12-10 22:58:08.286394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.619 [2024-12-10 22:58:08.286405] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.619 [2024-12-10 22:58:08.298676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.619 [2024-12-10 22:58:08.299032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.619 [2024-12-10 22:58:08.299062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.620 [2024-12-10 22:58:08.299079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.620 [2024-12-10 22:58:08.299316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.620 [2024-12-10 22:58:08.299542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.620 [2024-12-10 22:58:08.299574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.620 [2024-12-10 22:58:08.299588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.620 [2024-12-10 22:58:08.299615] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.620 [2024-12-10 22:58:08.311856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.620 [2024-12-10 22:58:08.312173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.620 [2024-12-10 22:58:08.312203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.620 [2024-12-10 22:58:08.312219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.620 [2024-12-10 22:58:08.312438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.620 [2024-12-10 22:58:08.312690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.620 [2024-12-10 22:58:08.312711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.620 [2024-12-10 22:58:08.312724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.620 [2024-12-10 22:58:08.312737] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.620 [2024-12-10 22:58:08.324880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.620 [2024-12-10 22:58:08.325287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.620 [2024-12-10 22:58:08.325316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.620 [2024-12-10 22:58:08.325332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.620 [2024-12-10 22:58:08.325582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.620 [2024-12-10 22:58:08.325778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.620 [2024-12-10 22:58:08.325796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.620 [2024-12-10 22:58:08.325809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.620 [2024-12-10 22:58:08.325823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.620 [2024-12-10 22:58:08.337936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.620 [2024-12-10 22:58:08.338345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.620 [2024-12-10 22:58:08.338374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.620 [2024-12-10 22:58:08.338390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.620 [2024-12-10 22:58:08.338643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.620 [2024-12-10 22:58:08.338869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.620 [2024-12-10 22:58:08.338889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.620 [2024-12-10 22:58:08.338902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.620 [2024-12-10 22:58:08.338914] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.880 [2024-12-10 22:58:08.350965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.880 [2024-12-10 22:58:08.351392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.880 [2024-12-10 22:58:08.351442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.880 [2024-12-10 22:58:08.351460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.880 [2024-12-10 22:58:08.351730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.880 [2024-12-10 22:58:08.351964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.880 [2024-12-10 22:58:08.351992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.880 [2024-12-10 22:58:08.352008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.880 [2024-12-10 22:58:08.352021] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.880 [2024-12-10 22:58:08.364233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.880 [2024-12-10 22:58:08.364585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.880 [2024-12-10 22:58:08.364616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.880 [2024-12-10 22:58:08.364633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.880 [2024-12-10 22:58:08.364878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.880 [2024-12-10 22:58:08.365083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.880 [2024-12-10 22:58:08.365115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.880 [2024-12-10 22:58:08.365128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.881 [2024-12-10 22:58:08.365141] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.881 [2024-12-10 22:58:08.377409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.881 [2024-12-10 22:58:08.377816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.881 [2024-12-10 22:58:08.377846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.881 [2024-12-10 22:58:08.377862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.881 [2024-12-10 22:58:08.378096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.881 [2024-12-10 22:58:08.378301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.881 [2024-12-10 22:58:08.378321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.881 [2024-12-10 22:58:08.378334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.881 [2024-12-10 22:58:08.378346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.881 [2024-12-10 22:58:08.390538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.881 [2024-12-10 22:58:08.390890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.881 [2024-12-10 22:58:08.390920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.881 [2024-12-10 22:58:08.390936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.881 [2024-12-10 22:58:08.391175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.881 [2024-12-10 22:58:08.391380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.881 [2024-12-10 22:58:08.391405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.881 [2024-12-10 22:58:08.391418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.881 [2024-12-10 22:58:08.391430] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.881 [2024-12-10 22:58:08.403738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.881 [2024-12-10 22:58:08.404177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.881 [2024-12-10 22:58:08.404208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:00.881 [2024-12-10 22:58:08.404224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:00.881 [2024-12-10 22:58:08.404463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:00.881 [2024-12-10 22:58:08.404708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.881 [2024-12-10 22:58:08.404730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.881 [2024-12-10 22:58:08.404743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.881 [2024-12-10 22:58:08.404756] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.881 [2024-12-10 22:58:08.416980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.881 [2024-12-10 22:58:08.417329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.881 [2024-12-10 22:58:08.417357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.881 [2024-12-10 22:58:08.417373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.881 [2024-12-10 22:58:08.417618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.881 [2024-12-10 22:58:08.417840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.881 [2024-12-10 22:58:08.417861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.881 [2024-12-10 22:58:08.417875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.881 [2024-12-10 22:58:08.417902] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.881 [2024-12-10 22:58:08.430130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.881 [2024-12-10 22:58:08.430479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.881 [2024-12-10 22:58:08.430507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.881 [2024-12-10 22:58:08.430524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.881 [2024-12-10 22:58:08.430807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.881 [2024-12-10 22:58:08.431030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.881 [2024-12-10 22:58:08.431051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.881 [2024-12-10 22:58:08.431063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.881 [2024-12-10 22:58:08.431080] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.881 [2024-12-10 22:58:08.443301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.881 [2024-12-10 22:58:08.443665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.881 [2024-12-10 22:58:08.443695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.881 [2024-12-10 22:58:08.443713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.881 [2024-12-10 22:58:08.443951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.881 [2024-12-10 22:58:08.444155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.881 [2024-12-10 22:58:08.444175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.881 [2024-12-10 22:58:08.444188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.881 [2024-12-10 22:58:08.444199] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.881 [2024-12-10 22:58:08.456381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.881 [2024-12-10 22:58:08.456799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.881 [2024-12-10 22:58:08.456828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.881 [2024-12-10 22:58:08.456844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.881 [2024-12-10 22:58:08.457082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.881 [2024-12-10 22:58:08.457286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.881 [2024-12-10 22:58:08.457306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.881 [2024-12-10 22:58:08.457318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.881 [2024-12-10 22:58:08.457330] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.881 [2024-12-10 22:58:08.469666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.881 [2024-12-10 22:58:08.470059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.881 [2024-12-10 22:58:08.470088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.881 [2024-12-10 22:58:08.470104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.881 [2024-12-10 22:58:08.470341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.881 [2024-12-10 22:58:08.470574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.881 [2024-12-10 22:58:08.470596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.881 [2024-12-10 22:58:08.470610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.881 [2024-12-10 22:58:08.470622] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.881 [2024-12-10 22:58:08.482936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.881 [2024-12-10 22:58:08.483348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.881 [2024-12-10 22:58:08.483377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.881 [2024-12-10 22:58:08.483394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.881 [2024-12-10 22:58:08.483630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.881 [2024-12-10 22:58:08.483850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.881 [2024-12-10 22:58:08.483870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.881 [2024-12-10 22:58:08.483884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.881 [2024-12-10 22:58:08.483895] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.881 [2024-12-10 22:58:08.496050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.882 [2024-12-10 22:58:08.496406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.882 [2024-12-10 22:58:08.496449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.882 [2024-12-10 22:58:08.496465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.882 [2024-12-10 22:58:08.496737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.882 [2024-12-10 22:58:08.496954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.882 [2024-12-10 22:58:08.496975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.882 [2024-12-10 22:58:08.496988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.882 [2024-12-10 22:58:08.497001] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.882 [2024-12-10 22:58:08.509177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.882 [2024-12-10 22:58:08.509539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.882 [2024-12-10 22:58:08.509593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.882 [2024-12-10 22:58:08.509611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.882 [2024-12-10 22:58:08.509868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.882 [2024-12-10 22:58:08.510060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.882 [2024-12-10 22:58:08.510080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.882 [2024-12-10 22:58:08.510092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.882 [2024-12-10 22:58:08.510104] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.882 [2024-12-10 22:58:08.522320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.882 [2024-12-10 22:58:08.522703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.882 [2024-12-10 22:58:08.522733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.882 [2024-12-10 22:58:08.522749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.882 [2024-12-10 22:58:08.522981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.882 [2024-12-10 22:58:08.523186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.882 [2024-12-10 22:58:08.523206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.882 [2024-12-10 22:58:08.523218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.882 [2024-12-10 22:58:08.523230] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.882 [2024-12-10 22:58:08.535589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.882 [2024-12-10 22:58:08.535997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.882 [2024-12-10 22:58:08.536026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.882 [2024-12-10 22:58:08.536042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.882 [2024-12-10 22:58:08.536280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.882 [2024-12-10 22:58:08.536486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.882 [2024-12-10 22:58:08.536506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.882 [2024-12-10 22:58:08.536519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.882 [2024-12-10 22:58:08.536531] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.882 [2024-12-10 22:58:08.548645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.882 [2024-12-10 22:58:08.548991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.882 [2024-12-10 22:58:08.549019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.882 [2024-12-10 22:58:08.549035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.882 [2024-12-10 22:58:08.549272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.882 [2024-12-10 22:58:08.549477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.882 [2024-12-10 22:58:08.549497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.882 [2024-12-10 22:58:08.549510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.882 [2024-12-10 22:58:08.549521] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.882 [2024-12-10 22:58:08.561705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.882 [2024-12-10 22:58:08.562112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.882 [2024-12-10 22:58:08.562140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.882 [2024-12-10 22:58:08.562157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.882 [2024-12-10 22:58:08.562394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.882 [2024-12-10 22:58:08.562628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.882 [2024-12-10 22:58:08.562654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.882 [2024-12-10 22:58:08.562668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.882 [2024-12-10 22:58:08.562680] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.882 [2024-12-10 22:58:08.574883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.882 [2024-12-10 22:58:08.575224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.882 [2024-12-10 22:58:08.575252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.882 [2024-12-10 22:58:08.575267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.882 [2024-12-10 22:58:08.575484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.882 [2024-12-10 22:58:08.575720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.882 [2024-12-10 22:58:08.575740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.882 [2024-12-10 22:58:08.575752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.882 [2024-12-10 22:58:08.575764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.882 [2024-12-10 22:58:08.588107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.882 [2024-12-10 22:58:08.588398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.882 [2024-12-10 22:58:08.588485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.882 [2024-12-10 22:58:08.588501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.882 [2024-12-10 22:58:08.588781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.882 [2024-12-10 22:58:08.589018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.882 [2024-12-10 22:58:08.589037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.882 [2024-12-10 22:58:08.589049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.882 [2024-12-10 22:58:08.589061] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.882 [2024-12-10 22:58:08.601576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.882 [2024-12-10 22:58:08.601995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.882 [2024-12-10 22:58:08.602032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:00.882 [2024-12-10 22:58:08.602065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:00.882 [2024-12-10 22:58:08.602302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:00.882 [2024-12-10 22:58:08.602517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.882 [2024-12-10 22:58:08.602565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.882 [2024-12-10 22:58:08.602581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.882 [2024-12-10 22:58:08.602599] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.143 [2024-12-10 22:58:08.614933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.143 [2024-12-10 22:58:08.615336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.143 [2024-12-10 22:58:08.615387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.143 [2024-12-10 22:58:08.615403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.143 [2024-12-10 22:58:08.615660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.143 [2024-12-10 22:58:08.615895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.143 [2024-12-10 22:58:08.615928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.143 [2024-12-10 22:58:08.615941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.143 [2024-12-10 22:58:08.615954] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.143 [2024-12-10 22:58:08.628312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.143 [2024-12-10 22:58:08.628748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.143 [2024-12-10 22:58:08.628802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.143 [2024-12-10 22:58:08.628821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.143 [2024-12-10 22:58:08.629064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.143 [2024-12-10 22:58:08.629255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.144 [2024-12-10 22:58:08.629274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.144 [2024-12-10 22:58:08.629286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.144 [2024-12-10 22:58:08.629298] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.144 [2024-12-10 22:58:08.641476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.144 [2024-12-10 22:58:08.641906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.144 [2024-12-10 22:58:08.641958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.144 [2024-12-10 22:58:08.641974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.144 [2024-12-10 22:58:08.642216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.144 [2024-12-10 22:58:08.642406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.144 [2024-12-10 22:58:08.642425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.144 [2024-12-10 22:58:08.642437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.144 [2024-12-10 22:58:08.642449] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.144 [2024-12-10 22:58:08.654668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.144 [2024-12-10 22:58:08.655020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.144 [2024-12-10 22:58:08.655048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.144 [2024-12-10 22:58:08.655064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.144 [2024-12-10 22:58:08.655296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.144 [2024-12-10 22:58:08.655502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.144 [2024-12-10 22:58:08.655521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.144 [2024-12-10 22:58:08.655533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.144 [2024-12-10 22:58:08.655554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.144 [2024-12-10 22:58:08.667862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.144 [2024-12-10 22:58:08.668289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.144 [2024-12-10 22:58:08.668319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.144 [2024-12-10 22:58:08.668335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.144 [2024-12-10 22:58:08.668586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.144 [2024-12-10 22:58:08.668788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.144 [2024-12-10 22:58:08.668808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.144 [2024-12-10 22:58:08.668822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.144 [2024-12-10 22:58:08.668848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.144 [2024-12-10 22:58:08.680953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.144 [2024-12-10 22:58:08.681349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.144 [2024-12-10 22:58:08.681397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.144 [2024-12-10 22:58:08.681414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.144 [2024-12-10 22:58:08.681661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.144 [2024-12-10 22:58:08.681863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.144 [2024-12-10 22:58:08.681882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.144 [2024-12-10 22:58:08.681910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.144 [2024-12-10 22:58:08.681923] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.144 [2024-12-10 22:58:08.694128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.144 [2024-12-10 22:58:08.694597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.144 [2024-12-10 22:58:08.694626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.144 [2024-12-10 22:58:08.694643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.144 [2024-12-10 22:58:08.694898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.144 [2024-12-10 22:58:08.695089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.144 [2024-12-10 22:58:08.695108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.144 [2024-12-10 22:58:08.695120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.144 [2024-12-10 22:58:08.695131] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.144 [2024-12-10 22:58:08.707278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.144 [2024-12-10 22:58:08.707591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.144 [2024-12-10 22:58:08.707619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.144 [2024-12-10 22:58:08.707635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.144 [2024-12-10 22:58:08.707856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.144 [2024-12-10 22:58:08.708063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.144 [2024-12-10 22:58:08.708082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.144 [2024-12-10 22:58:08.708095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.144 [2024-12-10 22:58:08.708106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.144 [2024-12-10 22:58:08.720534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.144 [2024-12-10 22:58:08.720907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.144 [2024-12-10 22:58:08.720935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.144 [2024-12-10 22:58:08.720950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.144 [2024-12-10 22:58:08.721166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.144 [2024-12-10 22:58:08.721371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.144 [2024-12-10 22:58:08.721390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.144 [2024-12-10 22:58:08.721402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.144 [2024-12-10 22:58:08.721413] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.144 [2024-12-10 22:58:08.733699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.144 [2024-12-10 22:58:08.734127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.144 [2024-12-10 22:58:08.734155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.144 [2024-12-10 22:58:08.734171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.144 [2024-12-10 22:58:08.734408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.144 [2024-12-10 22:58:08.734640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.144 [2024-12-10 22:58:08.734668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.144 [2024-12-10 22:58:08.734682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.144 [2024-12-10 22:58:08.734694] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.144 [2024-12-10 22:58:08.746940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.144 [2024-12-10 22:58:08.747252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.144 [2024-12-10 22:58:08.747325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.144 [2024-12-10 22:58:08.747340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.144 [2024-12-10 22:58:08.747576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.144 [2024-12-10 22:58:08.747798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.144 [2024-12-10 22:58:08.747819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.145 [2024-12-10 22:58:08.747848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.145 [2024-12-10 22:58:08.747860] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.145 [2024-12-10 22:58:08.760049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.145 [2024-12-10 22:58:08.760395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.145 [2024-12-10 22:58:08.760424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.145 [2024-12-10 22:58:08.760440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.145 [2024-12-10 22:58:08.760688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.145 [2024-12-10 22:58:08.760918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.145 [2024-12-10 22:58:08.760936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.145 [2024-12-10 22:58:08.760949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.145 [2024-12-10 22:58:08.760960] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.145 [2024-12-10 22:58:08.773129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.145 [2024-12-10 22:58:08.773477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.145 [2024-12-10 22:58:08.773506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.145 [2024-12-10 22:58:08.773522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.145 [2024-12-10 22:58:08.773787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.145 [2024-12-10 22:58:08.774011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.145 [2024-12-10 22:58:08.774030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.145 [2024-12-10 22:58:08.774042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.145 [2024-12-10 22:58:08.774058] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.145 [2024-12-10 22:58:08.786316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.145 [2024-12-10 22:58:08.786662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.145 [2024-12-10 22:58:08.786691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.145 [2024-12-10 22:58:08.786707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.145 [2024-12-10 22:58:08.786945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.145 [2024-12-10 22:58:08.787152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.145 [2024-12-10 22:58:08.787171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.145 [2024-12-10 22:58:08.787183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.145 [2024-12-10 22:58:08.787195] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.145 [2024-12-10 22:58:08.799385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.145 [2024-12-10 22:58:08.799817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.145 [2024-12-10 22:58:08.799845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.145 [2024-12-10 22:58:08.799861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.145 [2024-12-10 22:58:08.800100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.145 [2024-12-10 22:58:08.800306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.145 [2024-12-10 22:58:08.800325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.145 [2024-12-10 22:58:08.800338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.145 [2024-12-10 22:58:08.800349] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.145 [2024-12-10 22:58:08.812509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.145 [2024-12-10 22:58:08.812896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.145 [2024-12-10 22:58:08.812939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.145 [2024-12-10 22:58:08.812956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.145 [2024-12-10 22:58:08.813173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.145 [2024-12-10 22:58:08.813379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.145 [2024-12-10 22:58:08.813398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.145 [2024-12-10 22:58:08.813410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.145 [2024-12-10 22:58:08.813422] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.145 [2024-12-10 22:58:08.825697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.145 [2024-12-10 22:58:08.826129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.145 [2024-12-10 22:58:08.826165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.145 [2024-12-10 22:58:08.826181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.145 [2024-12-10 22:58:08.826428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.145 [2024-12-10 22:58:08.826664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.145 [2024-12-10 22:58:08.826684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.145 [2024-12-10 22:58:08.826697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.145 [2024-12-10 22:58:08.826708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.145 [2024-12-10 22:58:08.839009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.145 [2024-12-10 22:58:08.839412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.145 [2024-12-10 22:58:08.839439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.145 [2024-12-10 22:58:08.839454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.145 [2024-12-10 22:58:08.839703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.145 [2024-12-10 22:58:08.839917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.145 [2024-12-10 22:58:08.839936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.145 [2024-12-10 22:58:08.839948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.145 [2024-12-10 22:58:08.839959] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.145 [2024-12-10 22:58:08.852106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.145 [2024-12-10 22:58:08.852563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.145 [2024-12-10 22:58:08.852614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.145 [2024-12-10 22:58:08.852632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.145 [2024-12-10 22:58:08.852863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.145 [2024-12-10 22:58:08.853097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.145 [2024-12-10 22:58:08.853117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.145 [2024-12-10 22:58:08.853130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.145 [2024-12-10 22:58:08.853142] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.145 [2024-12-10 22:58:08.865387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.145 [2024-12-10 22:58:08.865829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.145 [2024-12-10 22:58:08.865873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.145 [2024-12-10 22:58:08.865889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.145 [2024-12-10 22:58:08.866123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.145 [2024-12-10 22:58:08.866314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.145 [2024-12-10 22:58:08.866333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.145 [2024-12-10 22:58:08.866345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.145 [2024-12-10 22:58:08.866357] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.405 [2024-12-10 22:58:08.878950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.405 [2024-12-10 22:58:08.879332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.406 [2024-12-10 22:58:08.879367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.406 [2024-12-10 22:58:08.879383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.406 [2024-12-10 22:58:08.879632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.406 [2024-12-10 22:58:08.879864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.406 [2024-12-10 22:58:08.879883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.406 [2024-12-10 22:58:08.879896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.406 [2024-12-10 22:58:08.879908] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.406 [2024-12-10 22:58:08.892341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.406 [2024-12-10 22:58:08.892711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.406 [2024-12-10 22:58:08.892741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.406 [2024-12-10 22:58:08.892760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.406 [2024-12-10 22:58:08.893005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.406 [2024-12-10 22:58:08.893200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.406 [2024-12-10 22:58:08.893219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.406 [2024-12-10 22:58:08.893231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.406 [2024-12-10 22:58:08.893244] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.406 [2024-12-10 22:58:08.905689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.406 [2024-12-10 22:58:08.906051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.406 [2024-12-10 22:58:08.906079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.406 [2024-12-10 22:58:08.906094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.406 [2024-12-10 22:58:08.906313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.406 [2024-12-10 22:58:08.906552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.406 [2024-12-10 22:58:08.906579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.406 [2024-12-10 22:58:08.906594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.406 [2024-12-10 22:58:08.906606] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.406 [2024-12-10 22:58:08.919025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.406 [2024-12-10 22:58:08.919444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.406 [2024-12-10 22:58:08.919473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.406 [2024-12-10 22:58:08.919497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.406 [2024-12-10 22:58:08.919780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.406 [2024-12-10 22:58:08.920004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.406 [2024-12-10 22:58:08.920024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.406 [2024-12-10 22:58:08.920036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.406 [2024-12-10 22:58:08.920047] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.406 [2024-12-10 22:58:08.932208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.406 [2024-12-10 22:58:08.932592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.406 [2024-12-10 22:58:08.932628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.406 [2024-12-10 22:58:08.932644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.406 [2024-12-10 22:58:08.932848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.406 [2024-12-10 22:58:08.933063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.406 [2024-12-10 22:58:08.933083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.406 [2024-12-10 22:58:08.933096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.406 [2024-12-10 22:58:08.933108] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.406 [2024-12-10 22:58:08.945562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.406 [2024-12-10 22:58:08.945951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.406 [2024-12-10 22:58:08.945984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.406 [2024-12-10 22:58:08.946001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.406 [2024-12-10 22:58:08.946248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.406 [2024-12-10 22:58:08.946438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.406 [2024-12-10 22:58:08.946457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.406 [2024-12-10 22:58:08.946469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.406 [2024-12-10 22:58:08.946485] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.406 [2024-12-10 22:58:08.958937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.406 [2024-12-10 22:58:08.959276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.406 [2024-12-10 22:58:08.959304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.406 [2024-12-10 22:58:08.959319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.406 [2024-12-10 22:58:08.959538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.406 [2024-12-10 22:58:08.959777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.406 [2024-12-10 22:58:08.959797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.406 [2024-12-10 22:58:08.959810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.406 [2024-12-10 22:58:08.959822] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.406 [2024-12-10 22:58:08.971981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.406 [2024-12-10 22:58:08.972396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.406 [2024-12-10 22:58:08.972423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.406 [2024-12-10 22:58:08.972438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.406 [2024-12-10 22:58:08.972707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.406 [2024-12-10 22:58:08.972911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.406 [2024-12-10 22:58:08.972944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.406 [2024-12-10 22:58:08.972957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.406 [2024-12-10 22:58:08.972969] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.406 [2024-12-10 22:58:08.985136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.406 [2024-12-10 22:58:08.985482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.406 [2024-12-10 22:58:08.985511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.406 [2024-12-10 22:58:08.985527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.406 [2024-12-10 22:58:08.985792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.406 [2024-12-10 22:58:08.986000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.406 [2024-12-10 22:58:08.986019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.406 [2024-12-10 22:58:08.986032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.406 [2024-12-10 22:58:08.986044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.406 [2024-12-10 22:58:08.998176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.407 [2024-12-10 22:58:08.998539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.407 [2024-12-10 22:58:08.998597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.407 [2024-12-10 22:58:08.998613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.407 [2024-12-10 22:58:08.998831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.407 [2024-12-10 22:58:08.999036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.407 [2024-12-10 22:58:08.999055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.407 [2024-12-10 22:58:08.999068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.407 [2024-12-10 22:58:08.999079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.407 [2024-12-10 22:58:09.011194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.407 [2024-12-10 22:58:09.011492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.407 [2024-12-10 22:58:09.011588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.407 [2024-12-10 22:58:09.011606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.407 [2024-12-10 22:58:09.011837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.407 [2024-12-10 22:58:09.012042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.407 [2024-12-10 22:58:09.012062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.407 [2024-12-10 22:58:09.012074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.407 [2024-12-10 22:58:09.012086] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.407 [2024-12-10 22:58:09.024244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.407 [2024-12-10 22:58:09.024661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.407 [2024-12-10 22:58:09.024690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.407 [2024-12-10 22:58:09.024707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.407 [2024-12-10 22:58:09.024945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.407 [2024-12-10 22:58:09.025151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.407 [2024-12-10 22:58:09.025169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.407 [2024-12-10 22:58:09.025182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.407 [2024-12-10 22:58:09.025193] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.407 [2024-12-10 22:58:09.037372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:01.407 [2024-12-10 22:58:09.037736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.407 [2024-12-10 22:58:09.037764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:01.407 [2024-12-10 22:58:09.037780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:01.407 [2024-12-10 22:58:09.037996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:01.407 [2024-12-10 22:58:09.038187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:01.407 [2024-12-10 22:58:09.038206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:01.407 [2024-12-10 22:58:09.038218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:01.407 [2024-12-10 22:58:09.038230] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:01.407 [2024-12-10 22:58:09.050409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.407 [2024-12-10 22:58:09.050833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.407 [2024-12-10 22:58:09.050861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.407 [2024-12-10 22:58:09.050877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.407 [2024-12-10 22:58:09.051117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.407 [2024-12-10 22:58:09.051322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.407 [2024-12-10 22:58:09.051341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.407 [2024-12-10 22:58:09.051353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.407 [2024-12-10 22:58:09.051365] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.407 [2024-12-10 22:58:09.063443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.407 [2024-12-10 22:58:09.063879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.407 [2024-12-10 22:58:09.063924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.407 [2024-12-10 22:58:09.063940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.407 [2024-12-10 22:58:09.064176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.407 [2024-12-10 22:58:09.064380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.407 [2024-12-10 22:58:09.064399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.407 [2024-12-10 22:58:09.064411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.407 [2024-12-10 22:58:09.064423] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.407 [2024-12-10 22:58:09.076510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.407 [2024-12-10 22:58:09.076861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.407 [2024-12-10 22:58:09.076889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.407 [2024-12-10 22:58:09.076905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.407 [2024-12-10 22:58:09.077122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.407 [2024-12-10 22:58:09.077327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.407 [2024-12-10 22:58:09.077350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.407 [2024-12-10 22:58:09.077364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.407 [2024-12-10 22:58:09.077375] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.407 [2024-12-10 22:58:09.089584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.407 [2024-12-10 22:58:09.089878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.407 [2024-12-10 22:58:09.089921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.407 [2024-12-10 22:58:09.089937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.407 [2024-12-10 22:58:09.090154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.407 [2024-12-10 22:58:09.090361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.407 [2024-12-10 22:58:09.090380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.407 [2024-12-10 22:58:09.090392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.407 [2024-12-10 22:58:09.090403] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.407 [2024-12-10 22:58:09.102702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.407 [2024-12-10 22:58:09.103118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.407 [2024-12-10 22:58:09.103155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.407 [2024-12-10 22:58:09.103171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.407 [2024-12-10 22:58:09.103417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.407 [2024-12-10 22:58:09.103659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.407 [2024-12-10 22:58:09.103681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.407 [2024-12-10 22:58:09.103695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.407 [2024-12-10 22:58:09.103707] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.407 4228.80 IOPS, 16.52 MiB/s [2024-12-10T21:58:09.139Z] [2024-12-10 22:58:09.116043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.407 [2024-12-10 22:58:09.116392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.407 [2024-12-10 22:58:09.116420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.408 [2024-12-10 22:58:09.116437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.408 [2024-12-10 22:58:09.116703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.408 [2024-12-10 22:58:09.116915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.408 [2024-12-10 22:58:09.116934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.408 [2024-12-10 22:58:09.116947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.408 [2024-12-10 22:58:09.116962] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.408 [2024-12-10 22:58:09.129233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.408 [2024-12-10 22:58:09.129589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.408 [2024-12-10 22:58:09.129620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.408 [2024-12-10 22:58:09.129637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.408 [2024-12-10 22:58:09.129893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.408 [2024-12-10 22:58:09.130091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.408 [2024-12-10 22:58:09.130110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.408 [2024-12-10 22:58:09.130123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.408 [2024-12-10 22:58:09.130135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.668 [2024-12-10 22:58:09.142934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.668 [2024-12-10 22:58:09.143274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.668 [2024-12-10 22:58:09.143303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.668 [2024-12-10 22:58:09.143320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.668 [2024-12-10 22:58:09.143554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.668 [2024-12-10 22:58:09.143791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.668 [2024-12-10 22:58:09.143813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.668 [2024-12-10 22:58:09.143828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.668 [2024-12-10 22:58:09.143852] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.668 [2024-12-10 22:58:09.156594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.668 [2024-12-10 22:58:09.156981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.668 [2024-12-10 22:58:09.157035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.668 [2024-12-10 22:58:09.157051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.668 [2024-12-10 22:58:09.157291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.668 [2024-12-10 22:58:09.157542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.668 [2024-12-10 22:58:09.157573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.668 [2024-12-10 22:58:09.157588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.668 [2024-12-10 22:58:09.157601] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.668 [2024-12-10 22:58:09.170269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.668 [2024-12-10 22:58:09.170669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.668 [2024-12-10 22:58:09.170699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.668 [2024-12-10 22:58:09.170716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.668 [2024-12-10 22:58:09.170958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.668 [2024-12-10 22:58:09.171189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.668 [2024-12-10 22:58:09.171209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.668 [2024-12-10 22:58:09.171222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.668 [2024-12-10 22:58:09.171234] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.668 [2024-12-10 22:58:09.183901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.668 [2024-12-10 22:58:09.184261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.668 [2024-12-10 22:58:09.184290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.668 [2024-12-10 22:58:09.184306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.668 [2024-12-10 22:58:09.184563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.668 [2024-12-10 22:58:09.184785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.668 [2024-12-10 22:58:09.184807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.668 [2024-12-10 22:58:09.184822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.668 [2024-12-10 22:58:09.184835] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.668 [2024-12-10 22:58:09.197449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.668 [2024-12-10 22:58:09.197777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.668 [2024-12-10 22:58:09.197807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.668 [2024-12-10 22:58:09.197824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.668 [2024-12-10 22:58:09.198056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.668 [2024-12-10 22:58:09.198262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.668 [2024-12-10 22:58:09.198281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.668 [2024-12-10 22:58:09.198293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.668 [2024-12-10 22:58:09.198305] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.668 [2024-12-10 22:58:09.211116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.669 [2024-12-10 22:58:09.211515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.669 [2024-12-10 22:58:09.211553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.669 [2024-12-10 22:58:09.211572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.669 [2024-12-10 22:58:09.211797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.669 [2024-12-10 22:58:09.212017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.669 [2024-12-10 22:58:09.212036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.669 [2024-12-10 22:58:09.212049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.669 [2024-12-10 22:58:09.212061] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.669 [2024-12-10 22:58:09.224806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.669 [2024-12-10 22:58:09.225242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.669 [2024-12-10 22:58:09.225274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.669 [2024-12-10 22:58:09.225291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.669 [2024-12-10 22:58:09.225537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.669 [2024-12-10 22:58:09.225782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.669 [2024-12-10 22:58:09.225804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.669 [2024-12-10 22:58:09.225819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.669 [2024-12-10 22:58:09.225832] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.669 [2024-12-10 22:58:09.238227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.669 [2024-12-10 22:58:09.238677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.669 [2024-12-10 22:58:09.238706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.669 [2024-12-10 22:58:09.238723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.669 [2024-12-10 22:58:09.238954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.669 [2024-12-10 22:58:09.239160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.669 [2024-12-10 22:58:09.239179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.669 [2024-12-10 22:58:09.239191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.669 [2024-12-10 22:58:09.239202] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.669 [2024-12-10 22:58:09.251446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.669 [2024-12-10 22:58:09.251810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.669 [2024-12-10 22:58:09.251853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.669 [2024-12-10 22:58:09.251870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.669 [2024-12-10 22:58:09.252106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.669 [2024-12-10 22:58:09.252296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.669 [2024-12-10 22:58:09.252319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.669 [2024-12-10 22:58:09.252332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.669 [2024-12-10 22:58:09.252343] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.669 [2024-12-10 22:58:09.264722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.669 [2024-12-10 22:58:09.265084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.669 [2024-12-10 22:58:09.265112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.669 [2024-12-10 22:58:09.265128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.669 [2024-12-10 22:58:09.265365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.669 [2024-12-10 22:58:09.265614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.669 [2024-12-10 22:58:09.265635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.669 [2024-12-10 22:58:09.265648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.669 [2024-12-10 22:58:09.265660] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.669 [2024-12-10 22:58:09.277731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.669 [2024-12-10 22:58:09.278074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.669 [2024-12-10 22:58:09.278102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.669 [2024-12-10 22:58:09.278117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.669 [2024-12-10 22:58:09.278349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.669 [2024-12-10 22:58:09.278578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.669 [2024-12-10 22:58:09.278598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.669 [2024-12-10 22:58:09.278610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.669 [2024-12-10 22:58:09.278621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.669 [2024-12-10 22:58:09.290765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.669 [2024-12-10 22:58:09.291180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.669 [2024-12-10 22:58:09.291208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.669 [2024-12-10 22:58:09.291229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.669 [2024-12-10 22:58:09.291458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.669 [2024-12-10 22:58:09.291680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.669 [2024-12-10 22:58:09.291701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.669 [2024-12-10 22:58:09.291714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.669 [2024-12-10 22:58:09.291730] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.669 [2024-12-10 22:58:09.304035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.669 [2024-12-10 22:58:09.304416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.669 [2024-12-10 22:58:09.304476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.669 [2024-12-10 22:58:09.304492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.669 [2024-12-10 22:58:09.304769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.669 [2024-12-10 22:58:09.304972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.669 [2024-12-10 22:58:09.305005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.669 [2024-12-10 22:58:09.305018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.669 [2024-12-10 22:58:09.305030] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.669 [2024-12-10 22:58:09.317118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.669 [2024-12-10 22:58:09.317594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.669 [2024-12-10 22:58:09.317622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.669 [2024-12-10 22:58:09.317638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.669 [2024-12-10 22:58:09.317867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.669 [2024-12-10 22:58:09.318073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.669 [2024-12-10 22:58:09.318092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.669 [2024-12-10 22:58:09.318104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.669 [2024-12-10 22:58:09.318115] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.669 [2024-12-10 22:58:09.330242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.669 [2024-12-10 22:58:09.330587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.669 [2024-12-10 22:58:09.330615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.669 [2024-12-10 22:58:09.330631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.670 [2024-12-10 22:58:09.330848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.670 [2024-12-10 22:58:09.331053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.670 [2024-12-10 22:58:09.331072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.670 [2024-12-10 22:58:09.331084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.670 [2024-12-10 22:58:09.331095] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.670 [2024-12-10 22:58:09.343248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.670 [2024-12-10 22:58:09.343607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.670 [2024-12-10 22:58:09.343636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.670 [2024-12-10 22:58:09.343653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.670 [2024-12-10 22:58:09.343890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.670 [2024-12-10 22:58:09.344081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.670 [2024-12-10 22:58:09.344099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.670 [2024-12-10 22:58:09.344111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.670 [2024-12-10 22:58:09.344122] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.670 [2024-12-10 22:58:09.356414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.670 [2024-12-10 22:58:09.356802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.670 [2024-12-10 22:58:09.356831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.670 [2024-12-10 22:58:09.356847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.670 [2024-12-10 22:58:09.357083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.670 [2024-12-10 22:58:09.357333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.670 [2024-12-10 22:58:09.357354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.670 [2024-12-10 22:58:09.357368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.670 [2024-12-10 22:58:09.357382] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.670 [2024-12-10 22:58:09.369783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.670 [2024-12-10 22:58:09.370164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.670 [2024-12-10 22:58:09.370192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.670 [2024-12-10 22:58:09.370208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.670 [2024-12-10 22:58:09.370445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.670 [2024-12-10 22:58:09.370682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.670 [2024-12-10 22:58:09.370702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.670 [2024-12-10 22:58:09.370716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.670 [2024-12-10 22:58:09.370727] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.670 [2024-12-10 22:58:09.383022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.670 [2024-12-10 22:58:09.383437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.670 [2024-12-10 22:58:09.383466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.670 [2024-12-10 22:58:09.383487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.670 [2024-12-10 22:58:09.383752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.670 [2024-12-10 22:58:09.383984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.670 [2024-12-10 22:58:09.384003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.670 [2024-12-10 22:58:09.384015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.670 [2024-12-10 22:58:09.384027] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.670 [2024-12-10 22:58:09.396289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.930 [2024-12-10 22:58:09.396657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.930 [2024-12-10 22:58:09.396686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.930 [2024-12-10 22:58:09.396703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.930 [2024-12-10 22:58:09.396938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.930 [2024-12-10 22:58:09.397142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.930 [2024-12-10 22:58:09.397164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.930 [2024-12-10 22:58:09.397176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.930 [2024-12-10 22:58:09.397188] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.930 [2024-12-10 22:58:09.409361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.930 [2024-12-10 22:58:09.409689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.930 [2024-12-10 22:58:09.409718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.930 [2024-12-10 22:58:09.409734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.930 [2024-12-10 22:58:09.409952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.930 [2024-12-10 22:58:09.410160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.930 [2024-12-10 22:58:09.410178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.930 [2024-12-10 22:58:09.410191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.930 [2024-12-10 22:58:09.410202] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.930 [2024-12-10 22:58:09.422440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.930 [2024-12-10 22:58:09.422811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.930 [2024-12-10 22:58:09.422839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.930 [2024-12-10 22:58:09.422855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.930 [2024-12-10 22:58:09.423094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.930 [2024-12-10 22:58:09.423299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.930 [2024-12-10 22:58:09.423323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.930 [2024-12-10 22:58:09.423336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.930 [2024-12-10 22:58:09.423348] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.930 [2024-12-10 22:58:09.435628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.930 [2024-12-10 22:58:09.436024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.930 [2024-12-10 22:58:09.436056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.930 [2024-12-10 22:58:09.436073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.930 [2024-12-10 22:58:09.436309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.930 [2024-12-10 22:58:09.436514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.930 [2024-12-10 22:58:09.436557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.930 [2024-12-10 22:58:09.436581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.930 [2024-12-10 22:58:09.436611] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.930 [2024-12-10 22:58:09.448804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.930 [2024-12-10 22:58:09.449161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.930 [2024-12-10 22:58:09.449191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.930 [2024-12-10 22:58:09.449208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.930 [2024-12-10 22:58:09.449453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.930 [2024-12-10 22:58:09.449698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.930 [2024-12-10 22:58:09.449718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.930 [2024-12-10 22:58:09.449731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.930 [2024-12-10 22:58:09.449744] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.930 [2024-12-10 22:58:09.461973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.930 [2024-12-10 22:58:09.462351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.930 [2024-12-10 22:58:09.462423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.930 [2024-12-10 22:58:09.462440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.930 [2024-12-10 22:58:09.462706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.931 [2024-12-10 22:58:09.462923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.931 [2024-12-10 22:58:09.462944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.931 [2024-12-10 22:58:09.462957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.931 [2024-12-10 22:58:09.462974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.931 [2024-12-10 22:58:09.475116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.931 [2024-12-10 22:58:09.475463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.931 [2024-12-10 22:58:09.475492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.931 [2024-12-10 22:58:09.475508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.931 [2024-12-10 22:58:09.475791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.931 [2024-12-10 22:58:09.476020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.931 [2024-12-10 22:58:09.476040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.931 [2024-12-10 22:58:09.476053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.931 [2024-12-10 22:58:09.476065] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.931 [2024-12-10 22:58:09.488204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.931 [2024-12-10 22:58:09.488499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.931 [2024-12-10 22:58:09.488542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.931 [2024-12-10 22:58:09.488571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.931 [2024-12-10 22:58:09.488791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.931 [2024-12-10 22:58:09.488997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.931 [2024-12-10 22:58:09.489018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.931 [2024-12-10 22:58:09.489030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.931 [2024-12-10 22:58:09.489042] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.931 [2024-12-10 22:58:09.501425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.931 [2024-12-10 22:58:09.501782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.931 [2024-12-10 22:58:09.501811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.931 [2024-12-10 22:58:09.501828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.931 [2024-12-10 22:58:09.502067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.931 [2024-12-10 22:58:09.502273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.931 [2024-12-10 22:58:09.502293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.931 [2024-12-10 22:58:09.502305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.931 [2024-12-10 22:58:09.502317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.931 [2024-12-10 22:58:09.514484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.931 [2024-12-10 22:58:09.514840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.931 [2024-12-10 22:58:09.514869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.931 [2024-12-10 22:58:09.514886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.931 [2024-12-10 22:58:09.515104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.931 [2024-12-10 22:58:09.515308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.931 [2024-12-10 22:58:09.515328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.931 [2024-12-10 22:58:09.515340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.931 [2024-12-10 22:58:09.515352] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.931 [2024-12-10 22:58:09.527671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.931 [2024-12-10 22:58:09.528070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.931 [2024-12-10 22:58:09.528126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.931 [2024-12-10 22:58:09.528142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.931 [2024-12-10 22:58:09.528388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.931 [2024-12-10 22:58:09.528624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.931 [2024-12-10 22:58:09.528646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.931 [2024-12-10 22:58:09.528659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.931 [2024-12-10 22:58:09.528672] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.931 [2024-12-10 22:58:09.540705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.931 [2024-12-10 22:58:09.541081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.931 [2024-12-10 22:58:09.541110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.931 [2024-12-10 22:58:09.541125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.931 [2024-12-10 22:58:09.541358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.931 [2024-12-10 22:58:09.541591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.931 [2024-12-10 22:58:09.541611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.931 [2024-12-10 22:58:09.541639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.931 [2024-12-10 22:58:09.541653] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.931 [2024-12-10 22:58:09.553878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.931 [2024-12-10 22:58:09.554234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.931 [2024-12-10 22:58:09.554264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.931 [2024-12-10 22:58:09.554280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.931 [2024-12-10 22:58:09.554522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.931 [2024-12-10 22:58:09.554770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.931 [2024-12-10 22:58:09.554792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.931 [2024-12-10 22:58:09.554806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.931 [2024-12-10 22:58:09.554819] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.931 [2024-12-10 22:58:09.567103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.931 [2024-12-10 22:58:09.567450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.931 [2024-12-10 22:58:09.567479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.931 [2024-12-10 22:58:09.567495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.931 [2024-12-10 22:58:09.567748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.931 [2024-12-10 22:58:09.567975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.931 [2024-12-10 22:58:09.567995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.931 [2024-12-10 22:58:09.568007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.931 [2024-12-10 22:58:09.568020] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 175776 Killed "${NVMF_APP[@]}" "$@" 00:27:01.931 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:01.931 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:01.931 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:01.931 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.931 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.931 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=176725 00:27:01.931 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:01.931 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 176725 00:27:01.931 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 176725 ']' 00:27:01.931 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.931 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.932 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:01.932 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.932 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.932 [2024-12-10 22:58:09.580646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.932 [2024-12-10 22:58:09.581066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.932 [2024-12-10 22:58:09.581095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.932 [2024-12-10 22:58:09.581116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.932 [2024-12-10 22:58:09.581355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.932 [2024-12-10 22:58:09.581582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.932 [2024-12-10 22:58:09.581604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.932 [2024-12-10 22:58:09.581620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.932 [2024-12-10 22:58:09.581633] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.932 [2024-12-10 22:58:09.594095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.932 [2024-12-10 22:58:09.594450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.932 [2024-12-10 22:58:09.594478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.932 [2024-12-10 22:58:09.594494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.932 [2024-12-10 22:58:09.594735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.932 [2024-12-10 22:58:09.594975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.932 [2024-12-10 22:58:09.594995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.932 [2024-12-10 22:58:09.595007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.932 [2024-12-10 22:58:09.595020] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.932 [2024-12-10 22:58:09.607734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.932 [2024-12-10 22:58:09.608152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.932 [2024-12-10 22:58:09.608182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.932 [2024-12-10 22:58:09.608198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.932 [2024-12-10 22:58:09.608430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.932 [2024-12-10 22:58:09.608678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.932 [2024-12-10 22:58:09.608701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.932 [2024-12-10 22:58:09.608716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.932 [2024-12-10 22:58:09.608729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.932 [2024-12-10 22:58:09.621119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.932 [2024-12-10 22:58:09.621552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.932 [2024-12-10 22:58:09.621596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.932 [2024-12-10 22:58:09.621613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.932 [2024-12-10 22:58:09.621847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.932 [2024-12-10 22:58:09.622069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.932 [2024-12-10 22:58:09.622088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.932 [2024-12-10 22:58:09.622102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.932 [2024-12-10 22:58:09.622114] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:01.932 [2024-12-10 22:58:09.625299] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:27:01.932 [2024-12-10 22:58:09.625358] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.932 [2024-12-10 22:58:09.634624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.932 [2024-12-10 22:58:09.634975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.932 [2024-12-10 22:58:09.635004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.932 [2024-12-10 22:58:09.635020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.932 [2024-12-10 22:58:09.635230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.932 [2024-12-10 22:58:09.635496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.932 [2024-12-10 22:58:09.635515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.932 [2024-12-10 22:58:09.635552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.932 [2024-12-10 22:58:09.635568] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:01.932 [2024-12-10 22:58:09.647831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:01.932 [2024-12-10 22:58:09.648187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.932 [2024-12-10 22:58:09.648217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:01.932 [2024-12-10 22:58:09.648234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:01.932 [2024-12-10 22:58:09.648478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:01.932 [2024-12-10 22:58:09.648721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:01.932 [2024-12-10 22:58:09.648742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:01.932 [2024-12-10 22:58:09.648755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:01.932 [2024-12-10 22:58:09.648768] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.192 [2024-12-10 22:58:09.661165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.192 [2024-12-10 22:58:09.661561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-12-10 22:58:09.661591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.192 [2024-12-10 22:58:09.661623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.192 [2024-12-10 22:58:09.661863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.192 [2024-12-10 22:58:09.662093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.192 [2024-12-10 22:58:09.662112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.192 [2024-12-10 22:58:09.662125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.192 [2024-12-10 22:58:09.662136] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.192 [2024-12-10 22:58:09.674586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.192 [2024-12-10 22:58:09.674969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-12-10 22:58:09.674998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.192 [2024-12-10 22:58:09.675015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.192 [2024-12-10 22:58:09.675241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.192 [2024-12-10 22:58:09.675453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.192 [2024-12-10 22:58:09.675472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.192 [2024-12-10 22:58:09.675485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.192 [2024-12-10 22:58:09.675497] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.192 [2024-12-10 22:58:09.687974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.192 [2024-12-10 22:58:09.688330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-12-10 22:58:09.688360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.192 [2024-12-10 22:58:09.688377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.192 [2024-12-10 22:58:09.688636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.192 [2024-12-10 22:58:09.688855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.192 [2024-12-10 22:58:09.688889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.192 [2024-12-10 22:58:09.688902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.193 [2024-12-10 22:58:09.688914] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.193 [2024-12-10 22:58:09.700470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:02.193 [2024-12-10 22:58:09.701411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.193 [2024-12-10 22:58:09.701857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-12-10 22:58:09.701887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.193 [2024-12-10 22:58:09.701904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.193 [2024-12-10 22:58:09.702148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.193 [2024-12-10 22:58:09.702359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.193 [2024-12-10 22:58:09.702384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.193 [2024-12-10 22:58:09.702397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.193 [2024-12-10 22:58:09.702409] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.193 [2024-12-10 22:58:09.714800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.193 [2024-12-10 22:58:09.715418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-12-10 22:58:09.715460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.193 [2024-12-10 22:58:09.715483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.193 [2024-12-10 22:58:09.715750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.193 [2024-12-10 22:58:09.715971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.193 [2024-12-10 22:58:09.715991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.193 [2024-12-10 22:58:09.716007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.193 [2024-12-10 22:58:09.716021] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.193 [2024-12-10 22:58:09.728128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.193 [2024-12-10 22:58:09.728504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-12-10 22:58:09.728557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.193 [2024-12-10 22:58:09.728576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.193 [2024-12-10 22:58:09.728806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.193 [2024-12-10 22:58:09.729036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.193 [2024-12-10 22:58:09.729057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.193 [2024-12-10 22:58:09.729071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.193 [2024-12-10 22:58:09.729084] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.193 [2024-12-10 22:58:09.741390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.193 [2024-12-10 22:58:09.741845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-12-10 22:58:09.741876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.193 [2024-12-10 22:58:09.741908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.193 [2024-12-10 22:58:09.742146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.193 [2024-12-10 22:58:09.742343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.193 [2024-12-10 22:58:09.742363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.193 [2024-12-10 22:58:09.742377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.193 [2024-12-10 22:58:09.742402] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.193 [2024-12-10 22:58:09.754801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.193 [2024-12-10 22:58:09.755160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-12-10 22:58:09.755189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.193 [2024-12-10 22:58:09.755205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.193 [2024-12-10 22:58:09.755443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.193 [2024-12-10 22:58:09.755687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.193 [2024-12-10 22:58:09.755709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.193 [2024-12-10 22:58:09.755723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.193 [2024-12-10 22:58:09.755736] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.193 [2024-12-10 22:58:09.757852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.193 [2024-12-10 22:58:09.757883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.193 [2024-12-10 22:58:09.757897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.193 [2024-12-10 22:58:09.757908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:02.193 [2024-12-10 22:58:09.757918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:02.193 [2024-12-10 22:58:09.759280] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:02.193 [2024-12-10 22:58:09.759345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:02.193 [2024-12-10 22:58:09.759348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.193 [2024-12-10 22:58:09.768270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.193 [2024-12-10 22:58:09.768765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-12-10 22:58:09.768806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.193 [2024-12-10 22:58:09.768827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.193 [2024-12-10 22:58:09.769083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.193 [2024-12-10 22:58:09.769296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.193 [2024-12-10 22:58:09.769317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.193 [2024-12-10 22:58:09.769334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.193 [2024-12-10 22:58:09.769349] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.193 [2024-12-10 22:58:09.781848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.193 [2024-12-10 22:58:09.782406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-12-10 22:58:09.782447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.193 [2024-12-10 22:58:09.782468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.193 [2024-12-10 22:58:09.782721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.193 [2024-12-10 22:58:09.782976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.193 [2024-12-10 22:58:09.782997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.193 [2024-12-10 22:58:09.783015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.193 [2024-12-10 22:58:09.783029] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.193 [2024-12-10 22:58:09.795476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.193 [2024-12-10 22:58:09.796124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-12-10 22:58:09.796166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.193 [2024-12-10 22:58:09.796187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.193 [2024-12-10 22:58:09.796442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.193 [2024-12-10 22:58:09.796688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.193 [2024-12-10 22:58:09.796711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.193 [2024-12-10 22:58:09.796729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.193 [2024-12-10 22:58:09.796744] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.193 [2024-12-10 22:58:09.809034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.193 [2024-12-10 22:58:09.809554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-12-10 22:58:09.809595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.194 [2024-12-10 22:58:09.809616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.194 [2024-12-10 22:58:09.809896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.194 [2024-12-10 22:58:09.810111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.194 [2024-12-10 22:58:09.810133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.194 [2024-12-10 22:58:09.810150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.194 [2024-12-10 22:58:09.810166] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.194 [2024-12-10 22:58:09.822595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.194 [2024-12-10 22:58:09.823085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-12-10 22:58:09.823124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.194 [2024-12-10 22:58:09.823145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.194 [2024-12-10 22:58:09.823400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.194 [2024-12-10 22:58:09.823654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.194 [2024-12-10 22:58:09.823690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.194 [2024-12-10 22:58:09.823709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.194 [2024-12-10 22:58:09.823725] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.194 [2024-12-10 22:58:09.836155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.194 [2024-12-10 22:58:09.836662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-12-10 22:58:09.836705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.194 [2024-12-10 22:58:09.836727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.194 [2024-12-10 22:58:09.836972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.194 [2024-12-10 22:58:09.837187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.194 [2024-12-10 22:58:09.837208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.194 [2024-12-10 22:58:09.837225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.194 [2024-12-10 22:58:09.837239] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.194 [2024-12-10 22:58:09.849654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.194 [2024-12-10 22:58:09.850055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-12-10 22:58:09.850085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.194 [2024-12-10 22:58:09.850102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.194 [2024-12-10 22:58:09.850335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.194 [2024-12-10 22:58:09.850603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.194 [2024-12-10 22:58:09.850627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.194 [2024-12-10 22:58:09.850642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.194 [2024-12-10 22:58:09.850655] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.194 [2024-12-10 22:58:09.863239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.194 [2024-12-10 22:58:09.863556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-12-10 22:58:09.863600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.194 [2024-12-10 22:58:09.863618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.194 [2024-12-10 22:58:09.863850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.194 [2024-12-10 22:58:09.864074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.194 [2024-12-10 22:58:09.864096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.194 [2024-12-10 22:58:09.864110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.194 [2024-12-10 22:58:09.864123] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.194 [2024-12-10 22:58:09.876968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.194 [2024-12-10 22:58:09.877316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-12-10 22:58:09.877347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.194 [2024-12-10 22:58:09.877363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.194 [2024-12-10 22:58:09.877606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.194 [2024-12-10 22:58:09.877822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.194 [2024-12-10 22:58:09.877843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.194 [2024-12-10 22:58:09.877858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.194 [2024-12-10 22:58:09.877871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.194 [2024-12-10 22:58:09.890607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.194 [2024-12-10 22:58:09.890997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-12-10 22:58:09.891027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.194 [2024-12-10 22:58:09.891044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.194 [2024-12-10 22:58:09.891276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.194 [2024-12-10 22:58:09.891489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.194 [2024-12-10 22:58:09.891511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.194 [2024-12-10 22:58:09.891525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.194 [2024-12-10 22:58:09.891539] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.194 [2024-12-10 22:58:09.904178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.194 [2024-12-10 22:58:09.904518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-12-10 22:58:09.904555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.194 [2024-12-10 22:58:09.904575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.194 [2024-12-10 22:58:09.904793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.194 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.194 [2024-12-10 22:58:09.905023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.194 [2024-12-10 22:58:09.905045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.194 [2024-12-10 22:58:09.905059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.194 [2024-12-10 22:58:09.905073] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.194 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:02.194 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:02.194 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:02.194 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:02.194 [2024-12-10 22:58:09.917932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.194 [2024-12-10 22:58:09.918286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-12-10 22:58:09.918320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420 00:27:02.194 [2024-12-10 22:58:09.918341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set 00:27:02.194 [2024-12-10 22:58:09.918609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor 00:27:02.194 [2024-12-10 22:58:09.918832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.194 [2024-12-10 22:58:09.918855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.194 [2024-12-10 22:58:09.918870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.194 [2024-12-10 22:58:09.918897] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:02.456 [2024-12-10 22:58:09.931603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:02.456 [2024-12-10 22:58:09.932026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.456 [2024-12-10 22:58:09.932058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:02.456 [2024-12-10 22:58:09.932076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:02.456 [2024-12-10 22:58:09.932308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:02.456 [2024-12-10 22:58:09.932558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:02.456 [2024-12-10 22:58:09.932581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:02.456 [2024-12-10 22:58:09.932605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:02.456 [2024-12-10 22:58:09.932618] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:02.456 [2024-12-10 22:58:09.935484] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:02.456 [2024-12-10 22:58:09.945508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:02.456 [2024-12-10 22:58:09.945889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.456 [2024-12-10 22:58:09.945923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:02.456 [2024-12-10 22:58:09.945948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:02.456 [2024-12-10 22:58:09.946183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:02.456 [2024-12-10 22:58:09.946409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:02.456 [2024-12-10 22:58:09.946430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:02.456 [2024-12-10 22:58:09.946444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:02.456 [2024-12-10 22:58:09.946457] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:02.456 [2024-12-10 22:58:09.959138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:02.456 [2024-12-10 22:58:09.959603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.456 [2024-12-10 22:58:09.959635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:02.456 [2024-12-10 22:58:09.959654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:02.456 [2024-12-10 22:58:09.959889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:02.456 [2024-12-10 22:58:09.960114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:02.456 [2024-12-10 22:58:09.960134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:02.456 [2024-12-10 22:58:09.960148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:02.456 [2024-12-10 22:58:09.960162] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:02.456 Malloc0
00:27:02.456 [2024-12-10 22:58:09.972765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.456 [2024-12-10 22:58:09.973266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.456 [2024-12-10 22:58:09.973304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:02.456 [2024-12-10 22:58:09.973326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.456 [2024-12-10 22:58:09.973561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:02.456 [2024-12-10 22:58:09.973817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:02.456 [2024-12-10 22:58:09.973864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:02.456 [2024-12-10 22:58:09.973892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:02.456 [2024-12-10 22:58:09.973918] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:02.456 [2024-12-10 22:58:09.986289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:02.456 [2024-12-10 22:58:09.986672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.456 [2024-12-10 22:58:09.986706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7dee0 with addr=10.0.0.2, port=4420
00:27:02.456 [2024-12-10 22:58:09.986724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7dee0 is same with the state(6) to be set
00:27:02.456 [2024-12-10 22:58:09.986958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7dee0 (9): Bad file descriptor
00:27:02.456 [2024-12-10 22:58:09.987181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:02.456 [2024-12-10 22:58:09.987204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:02.456 [2024-12-10 22:58:09.987218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:02.456 [2024-12-10 22:58:09.987231] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:02.456 [2024-12-10 22:58:09.992684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.456 22:58:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 176013
00:27:02.456 [2024-12-10 22:58:09.999874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:02.456 3524.00 IOPS, 13.77 MiB/s [2024-12-10T21:58:10.188Z] [2024-12-10 22:58:10.154075] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:27:04.773 4180.29 IOPS, 16.33 MiB/s [2024-12-10T21:58:13.442Z] 4732.12 IOPS, 18.48 MiB/s [2024-12-10T21:58:14.380Z] 5170.11 IOPS, 20.20 MiB/s [2024-12-10T21:58:15.316Z] 5515.10 IOPS, 21.54 MiB/s [2024-12-10T21:58:16.258Z] 5800.09 IOPS, 22.66 MiB/s [2024-12-10T21:58:17.200Z] 6032.92 IOPS, 23.57 MiB/s [2024-12-10T21:58:18.171Z] 6234.69 IOPS, 24.35 MiB/s [2024-12-10T21:58:19.550Z] 6403.71 IOPS, 25.01 MiB/s [2024-12-10T21:58:19.550Z] 6537.80 IOPS, 25.54 MiB/s
00:27:11.818 Latency(us)
00:27:11.818 [2024-12-10T21:58:19.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:11.818 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:11.818 Verification LBA range: start 0x0 length 0x4000
00:27:11.818 Nvme1n1 : 15.01 6540.46 25.55 10324.72 0.00 7566.73 843.47 18641.35
00:27:11.818 [2024-12-10T21:58:19.550Z] ===================================================================================================================
00:27:11.818 [2024-12-10T21:58:19.550Z] Total : 6540.46 25.55 10324.72 0.00 7566.73 843.47 18641.35
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 176725 ']'
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 176725
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 176725 ']'
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 176725
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 176725
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 176725'
killing process with pid 176725
22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 176725
00:27:11.818 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 176725
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:12.079 22:58:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:14.622
00:27:14.622 real 0m22.669s
00:27:14.622 user 1m0.734s
00:27:14.622 sys 0m4.135s
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:14.622 ************************************
00:27:14.622 END TEST nvmf_bdevperf
00:27:14.622 ************************************
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.622 ************************************
00:27:14.622 START TEST nvmf_target_disconnect
00:27:14.622 ************************************
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:27:14.622 * Looking for test storage...
00:27:14.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:14.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.622 --rc genhtml_branch_coverage=1
00:27:14.622 --rc genhtml_function_coverage=1
00:27:14.622 --rc genhtml_legend=1
00:27:14.622 --rc geninfo_all_blocks=1
00:27:14.622 --rc geninfo_unexecuted_blocks=1
00:27:14.622
00:27:14.622 '
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:14.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.622 --rc genhtml_branch_coverage=1
00:27:14.622 --rc genhtml_function_coverage=1
00:27:14.622 --rc genhtml_legend=1
00:27:14.622 --rc geninfo_all_blocks=1
00:27:14.622 --rc geninfo_unexecuted_blocks=1
00:27:14.622
00:27:14.622 '
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:27:14.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.622 --rc genhtml_branch_coverage=1
00:27:14.622 --rc genhtml_function_coverage=1
00:27:14.622 --rc genhtml_legend=1
00:27:14.622 --rc geninfo_all_blocks=1
00:27:14.622 --rc geninfo_unexecuted_blocks=1
00:27:14.622
00:27:14.622 '
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:27:14.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.622 --rc genhtml_branch_coverage=1
00:27:14.622 --rc genhtml_function_coverage=1
00:27:14.622 --rc genhtml_legend=1
00:27:14.622 --rc geninfo_all_blocks=1
00:27:14.622 --rc geninfo_unexecuted_blocks=1
00:27:14.622
00:27:14.622 '
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:14.622 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:27:14.623 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:27:14.623 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:14.623 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:14.623 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:14.623 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:14.623 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:14.623 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:27:14.623 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:14.623 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:14.623 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:14.623 22:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:14.622 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:14.622 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:14.622 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:27:14.622 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:14.622 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0
00:27:14.622 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:14.622 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:14.622 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:14.622 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:14.622 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:14.622 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:27:14.623 22:58:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=()
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=()
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:27:16.530 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:27:16.530 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b ==
\0\x\1\0\1\7 ]] 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:16.530 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
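The discovery loop above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by the `##*/` strip) resolves each NVMe-capable PCI device to its kernel network interface names via sysfs. The same lookup can be sketched in Python; the function name and the throwaway fake sysfs tree below are illustrative only, not part of the SPDK scripts:

```python
import pathlib
import tempfile

def net_devs_for_pci(pci_addr, sysfs_root="/sys/bus/pci/devices"):
    """Mirror nvmf/common.sh's pci_net_devs glob: list the network
    interface names registered under a PCI device in sysfs."""
    net_dir = pathlib.Path(sysfs_root) / pci_addr / "net"
    return sorted(p.name for p in net_dir.glob("*")) if net_dir.is_dir() else []

# Demo against a throwaway fake sysfs tree (real use would point at /sys).
root = tempfile.mkdtemp()
fake = pathlib.Path(root) / "0000:0a:00.0" / "net" / "cvl_0_0"
fake.mkdir(parents=True)
devs = net_devs_for_pci("0000:0a:00.0", sysfs_root=root)  # ['cvl_0_0']
```

This matches what the log reports for both ports of the E810 NIC ("Found net devices under 0000:0a:00.0: cvl_0_0" and ".1: cvl_0_1").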
00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:16.530 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:16.530 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.531 22:58:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:16.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:27:16.531 00:27:16.531 --- 10.0.0.2 ping statistics --- 00:27:16.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.531 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:27:16.531 00:27:16.531 --- 10.0.0.1 ping statistics --- 00:27:16.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.531 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:16.531 22:58:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:16.531 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:16.790 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:16.790 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:16.790 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.790 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:16.790 ************************************ 00:27:16.790 START TEST nvmf_target_disconnect_tc1 00:27:16.790 ************************************ 00:27:16.790 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:16.790 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.790 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:16.790 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.791 [2024-12-10 22:58:24.386584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.791 [2024-12-10 22:58:24.386649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2047f40 with 
addr=10.0.0.2, port=4420 00:27:16.791 [2024-12-10 22:58:24.386697] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:16.791 [2024-12-10 22:58:24.386719] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:16.791 [2024-12-10 22:58:24.386733] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:16.791 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:16.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:16.791 Initializing NVMe Controllers 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:16.791 00:27:16.791 real 0m0.105s 00:27:16.791 user 0m0.049s 00:27:16.791 sys 0m0.052s 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:16.791 ************************************ 00:27:16.791 END TEST nvmf_target_disconnect_tc1 00:27:16.791 ************************************ 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:16.791 22:58:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:16.791 ************************************ 00:27:16.791 START TEST nvmf_target_disconnect_tc2 00:27:16.791 ************************************ 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=179890 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 179890 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 179890 ']' 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.791 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.791 [2024-12-10 22:58:24.507517] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:27:16.791 [2024-12-10 22:58:24.507612] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.050 [2024-12-10 22:58:24.580040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.050 [2024-12-10 22:58:24.638633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.050 [2024-12-10 22:58:24.638687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.050 [2024-12-10 22:58:24.638713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.050 [2024-12-10 22:58:24.638724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.050 [2024-12-10 22:58:24.638733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
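The target is started with `-m 0xF0`, and the reactor lines that follow confirm the mask decodes to cores 4 through 7. Decoding such an SPDK/DPDK-style hex core mask is a one-liner; this helper is a sketch for reading logs like this one, not an SPDK API:

```python
def coremask_to_cores(mask):
    """Expand an SPDK/DPDK-style hex core mask (e.g. '0xF0') into the
    list of CPU core indices it selects (set bits, LSB = core 0)."""
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if value >> bit & 1]

cores = coremask_to_cores("0xF0")  # [4, 5, 6, 7]
```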
00:27:17.050 [2024-12-10 22:58:24.640247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:27:17.050 [2024-12-10 22:58:24.640308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:27:17.050 [2024-12-10 22:58:24.640374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:27:17.050 [2024-12-10 22:58:24.640377] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:27:17.050 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:17.050 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:17.050 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:17.050 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:17.050 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.310 Malloc0 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.310 22:58:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.310 [2024-12-10 22:58:24.829780] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.310 22:58:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.310 [2024-12-10 22:58:24.858056] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=179920 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:17.310 22:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:19.229 22:58:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 179890 00:27:19.229 22:58:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Write completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Write completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Write completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Write completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Write completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Write completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Write completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 Read completed with error (sct=0, sc=8) 00:27:19.229 starting I/O failed 00:27:19.229 
Write completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Write completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Write completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Write completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Write completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Read completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Read completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Write completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Write completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Write completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Write completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Read completed with error (sct=0, sc=8)
00:27:19.229 starting I/O failed
00:27:19.229 Read completed with error (sct=0, sc=8)
00:27:19.230 [2024-12-10 22:58:26.884382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 [2024-12-10 22:58:26.884725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Read completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 Write completed with error (sct=0, sc=8)
00:27:19.230 starting I/O failed
00:27:19.230 [2024-12-10 22:58:26.885072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.230 [2024-12-10 22:58:26.885308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.230 [2024-12-10 22:58:26.885349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.230 qpair failed and we were unable to recover it.
00:27:19.230 [2024-12-10 22:58:26.885485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.230 [2024-12-10 22:58:26.885514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.230 qpair failed and we were unable to recover it.
00:27:19.230 [2024-12-10 22:58:26.885665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.230 [2024-12-10 22:58:26.885694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.230 qpair failed and we were unable to recover it.
00:27:19.230 [2024-12-10 22:58:26.885820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.230 [2024-12-10 22:58:26.885858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.230 qpair failed and we were unable to recover it.
00:27:19.230 [2024-12-10 22:58:26.885976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.230 [2024-12-10 22:58:26.886003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.230 qpair failed and we were unable to recover it.
00:27:19.230 [2024-12-10 22:58:26.886117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.230 [2024-12-10 22:58:26.886150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.230 qpair failed and we were unable to recover it.
00:27:19.230 [2024-12-10 22:58:26.886287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.230 [2024-12-10 22:58:26.886314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.230 qpair failed and we were unable to recover it.
00:27:19.230 [2024-12-10 22:58:26.886407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.230 [2024-12-10 22:58:26.886435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.230 qpair failed and we were unable to recover it.
00:27:19.230 [2024-12-10 22:58:26.886561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.230 [2024-12-10 22:58:26.886600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.230 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.886701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.886728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.886848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.886875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.887005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.887032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.887157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.887185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.887277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.887305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.887405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.887433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.887524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.887560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.887687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.887714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.887792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.887819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.887900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.887927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.888073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.888100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.888237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.888279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.888366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.888394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.888512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.888538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.888637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.888665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.888756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.888783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.888936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.888963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.889082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.889108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.889250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.889277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.889390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.889417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.889508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.889536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.889640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.889667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.889758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.889786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.889930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.889957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.890078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.890106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.890251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.890278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.890391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.890420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.890534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.890572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.890680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.890729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.890863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.890891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.891056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.891084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.891178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.891205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.891313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.891340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.891431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.231 [2024-12-10 22:58:26.891459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.231 qpair failed and we were unable to recover it.
00:27:19.231 [2024-12-10 22:58:26.891567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.891607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.891711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.891739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.891818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.891843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.891962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.891989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.892107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.892133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.892219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.892244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.892359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.892385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.892507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.892536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.892688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.892715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.892855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.892882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.893007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.893034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.893171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.893199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.893326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.893355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.893471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.893499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.893605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.893632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.893724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.893751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.893844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.893870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.894007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.894033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.894179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.894204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.894290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.894316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.894423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.894453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.894587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.894616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.894711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.894739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.894877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.894904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.895019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.895046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.895156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.895184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.895295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.895323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.895461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.895488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.895580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.895608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.895702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.895729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.895849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.895877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.895993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.896020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.896163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.896191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.896279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.896307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.896406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.896434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.232 qpair failed and we were unable to recover it.
00:27:19.232 [2024-12-10 22:58:26.896557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.232 [2024-12-10 22:58:26.896585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.896671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.896697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.896783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.896810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.896899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.896927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.897037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.897064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.897198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.897238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.897323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.897352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.897440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.897469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.897562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.897589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.897677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.897704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.897781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.897807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.897917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.897942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Read completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 Write completed with error (sct=0, sc=8)
00:27:19.233 starting I/O failed
00:27:19.233 [2024-12-10 22:58:26.898251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.233 [2024-12-10 22:58:26.898379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.898407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.898527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.898565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.898662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.898690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.898811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.898838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.898968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.898995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.899111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.899138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.899248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.899273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.899362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.899388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.899490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.899515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.899617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.899643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.899763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.899789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.899880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.233 [2024-12-10 22:58:26.899905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.233 qpair failed and we were unable to recover it.
00:27:19.233 [2024-12-10 22:58:26.900059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.900085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.900197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.900222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.900310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.900335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.900426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.900452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.900584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.900611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.900703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.900730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.900855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.900881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.901019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.901044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.901150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.901180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.901336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.901364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.901473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.901500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.901604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.901631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.901718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.901746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.901825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.901852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.901968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.901995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.902122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.902173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.902275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.902315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.902434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.902463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.902559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.902588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.902673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.902700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.902783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.902811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.903010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.903043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.903122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.903149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.903242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.903271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.903391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.903419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.903525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.903573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.903704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.903733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.903822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.903850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.903967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.903994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.234 [2024-12-10 22:58:26.904202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.234 [2024-12-10 22:58:26.904229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.234 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.904373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.904402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.904484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.904511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.904628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.904656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.904749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.904776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.904897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.904924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.905042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.905069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.905165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.905193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.905324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.905365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.905462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.905492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.905597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.905627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.905747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.905774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.905887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.905915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.906032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.906060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.906158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.906186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.906301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.906329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.906442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.906470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.906558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.906595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.906670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.906697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.906816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.906856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.907002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.907029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.907141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.907195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.907315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.907341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.907458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.907486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.907612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.907640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.907764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.907790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.907913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.907940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.908057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.908084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.908222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.908251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.908332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.908357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.908499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.908526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.908662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.908690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.908845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.908886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.909013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.235 [2024-12-10 22:58:26.909042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.235 qpair failed and we were unable to recover it.
00:27:19.235 [2024-12-10 22:58:26.909163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.236 [2024-12-10 22:58:26.909191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.236 qpair failed and we were unable to recover it.
00:27:19.236 [2024-12-10 22:58:26.909306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.236 [2024-12-10 22:58:26.909333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.236 qpair failed and we were unable to recover it.
00:27:19.236 [2024-12-10 22:58:26.909451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.236 [2024-12-10 22:58:26.909477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.236 qpair failed and we were unable to recover it.
00:27:19.236 [2024-12-10 22:58:26.909571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.236 [2024-12-10 22:58:26.909599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.236 qpair failed and we were unable to recover it.
00:27:19.236 [2024-12-10 22:58:26.909695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.236 [2024-12-10 22:58:26.909723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.236 qpair failed and we were unable to recover it.
00:27:19.236 [2024-12-10 22:58:26.909836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.236 [2024-12-10 22:58:26.909862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.236 qpair failed and we were unable to recover it.
00:27:19.236 [2024-12-10 22:58:26.909979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.236 [2024-12-10 22:58:26.910006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.236 qpair failed and we were unable to recover it.
00:27:19.236 [2024-12-10 22:58:26.910096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.910121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.910239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.910265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.910344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.910369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.910477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.910503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.910629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.910656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 
00:27:19.236 [2024-12-10 22:58:26.910748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.910775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.910867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.910894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.911001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.911028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.911145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.911172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.911297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.911323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 
00:27:19.236 [2024-12-10 22:58:26.911445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.911473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.911571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.911599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.911715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.911742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.911852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.911878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.911967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.911995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 
00:27:19.236 [2024-12-10 22:58:26.912104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.912130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.912247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.912274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.912418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.912445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.912573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.912619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.912768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.912797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 
00:27:19.236 [2024-12-10 22:58:26.912940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.912968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.913065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.913093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.913232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.913273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.913359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.913387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.913527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.913560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 
00:27:19.236 [2024-12-10 22:58:26.913676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.913703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.913819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.913845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.236 [2024-12-10 22:58:26.914058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.236 [2024-12-10 22:58:26.914085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.236 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.914198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.914225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.914352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.914393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 
00:27:19.237 [2024-12-10 22:58:26.914525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.914572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.914689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.914730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.914860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.914889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.915032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.915059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.915171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.915199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 
00:27:19.237 [2024-12-10 22:58:26.915310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.915337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.915421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.915448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.915550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.915580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.915681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.915707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.915803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.915832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 
00:27:19.237 [2024-12-10 22:58:26.915950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.915977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.916089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.916116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.916227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.916253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.916369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.916395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.916537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.916576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 
00:27:19.237 [2024-12-10 22:58:26.916698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.916729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.916817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.916845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.916931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.916958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.917073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.917100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.917191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.917221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 
00:27:19.237 [2024-12-10 22:58:26.917354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.917394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.917492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.917520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.917638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.917665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.917777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.917804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.917919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.917945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 
00:27:19.237 [2024-12-10 22:58:26.918020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.918045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.918217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.918268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.918386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.918415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.918536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.918569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.918728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.918757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 
00:27:19.237 [2024-12-10 22:58:26.918883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.918910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.919119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.237 [2024-12-10 22:58:26.919174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.237 qpair failed and we were unable to recover it. 00:27:19.237 [2024-12-10 22:58:26.919266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.919293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.919408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.919437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.919560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.919590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 
00:27:19.238 [2024-12-10 22:58:26.919680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.919708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.919826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.919853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.919965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.919993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.920071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.920096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.920217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.920244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 
00:27:19.238 [2024-12-10 22:58:26.920362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.920391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.920501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.920528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.920654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.920681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.920774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.920803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.920890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.920917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 
00:27:19.238 [2024-12-10 22:58:26.921044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.921085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.921171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.921199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.921347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.921408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.921514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.921541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.921694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.921720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 
00:27:19.238 [2024-12-10 22:58:26.921810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.921836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.922051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.922108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.922191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.922216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.922294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.922321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.922483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.922525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 
00:27:19.238 [2024-12-10 22:58:26.922641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.922675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.922794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.922822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.922935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.922962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.923124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.923176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.923292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.923330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 
00:27:19.238 [2024-12-10 22:58:26.923475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.923507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.923632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.923673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.923803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.923843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.923966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.923994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 00:27:19.238 [2024-12-10 22:58:26.924174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.238 [2024-12-10 22:58:26.924202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.238 qpair failed and we were unable to recover it. 
00:27:19.238 [2024-12-10 22:58:26.924385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.238 [2024-12-10 22:58:26.924413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.238 qpair failed and we were unable to recover it.
00:27:19.238 [2024-12-10 22:58:26.924559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.238 [2024-12-10 22:58:26.924586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.238 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.924675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.924702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.924783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.924810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.924906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.924933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.925041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.925067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.925181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.925209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.925303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.925330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.925474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.925500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.925635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.925676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.925803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.925832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.925982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.926010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.926124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.926152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.926267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.926295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.926399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.926440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.926540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.926574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.926687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.926714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.926812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.926841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.926926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.926954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.927120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.927184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.927385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.927413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.927490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.927517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.927647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.927673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.927788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.927815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.928011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.928036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.928152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.928179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.928348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.928406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.928528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.928568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.928685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.928713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.928804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.928830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.928915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.928947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.239 [2024-12-10 22:58:26.929170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.239 [2024-12-10 22:58:26.929227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.239 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.929429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.929486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.929598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.929624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.929715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.929741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.929821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.929846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.929937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.929962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.930052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.930079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.930169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.930195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.930305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.930331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.930414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.930440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.930582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.930610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.930698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.930724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.930843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.930869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.930989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.931018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.931104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.931132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.931247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.931274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.931388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.931415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.931528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.931562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.931667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.931708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.931831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.931859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.931977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.932004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.932160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.932187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.932331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.932358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.932482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.932522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.932627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.932657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.932805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.932835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.932955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.932989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.933217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.933268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.933383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.933411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.933565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.933594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.933709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.933738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.933860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.933888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.934003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.934030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.934153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.934179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.934322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.240 [2024-12-10 22:58:26.934349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.240 qpair failed and we were unable to recover it.
00:27:19.240 [2024-12-10 22:58:26.934463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.934491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.934613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.934654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.934770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.934800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.934944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.934972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.935189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.935216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.935334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.935361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.935494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.935535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.935648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.935677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.935758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.935786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.935875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.935903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.936021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.936051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.936189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.936241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.936385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.936412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.936489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.936514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.936627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.936667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.936815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.936842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.936950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.936976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.937065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.937090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.937237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.937271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.937376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.937417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.937535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.937574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.937717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.937744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.937929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.937989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.938214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.938263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.938373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.938400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.938516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.938550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.938664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.938691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.938785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.938812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.938955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.241 [2024-12-10 22:58:26.938982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.241 qpair failed and we were unable to recover it.
00:27:19.241 [2024-12-10 22:58:26.939118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.241 [2024-12-10 22:58:26.939145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.241 qpair failed and we were unable to recover it. 00:27:19.241 [2024-12-10 22:58:26.939222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.241 [2024-12-10 22:58:26.939250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.241 qpair failed and we were unable to recover it. 00:27:19.241 [2024-12-10 22:58:26.939342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.241 [2024-12-10 22:58:26.939370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.241 qpair failed and we were unable to recover it. 00:27:19.241 [2024-12-10 22:58:26.939510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.241 [2024-12-10 22:58:26.939562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.241 qpair failed and we were unable to recover it. 00:27:19.241 [2024-12-10 22:58:26.939689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.241 [2024-12-10 22:58:26.939718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.241 qpair failed and we were unable to recover it. 
00:27:19.241 [2024-12-10 22:58:26.939831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.241 [2024-12-10 22:58:26.939858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.241 qpair failed and we were unable to recover it. 00:27:19.241 [2024-12-10 22:58:26.939944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.939971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.940099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.940174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.940266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.940293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.940411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.940444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 
00:27:19.242 [2024-12-10 22:58:26.940569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.940595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.940714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.940743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.940860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.940889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.941006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.941033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.941129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.941155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 
00:27:19.242 [2024-12-10 22:58:26.941309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.941336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.941485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.941513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.941636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.941664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.941780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.941807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.941924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.941952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 
00:27:19.242 [2024-12-10 22:58:26.942075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.942102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.942215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.942243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.942354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.942381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.942521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.942554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.942670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.942697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 
00:27:19.242 [2024-12-10 22:58:26.942787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.942814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.942930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.942957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.943110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.943138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.943280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.943307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.943421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.943453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 
00:27:19.242 [2024-12-10 22:58:26.943599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.943627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.943775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.943802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.943919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.943947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.944067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.944095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.944205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.944232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 
00:27:19.242 [2024-12-10 22:58:26.944348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.944375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.944462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.944489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.944641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.944682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.944789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.944818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.944907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.944935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 
00:27:19.242 [2024-12-10 22:58:26.945110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.242 [2024-12-10 22:58:26.945160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.242 qpair failed and we were unable to recover it. 00:27:19.242 [2024-12-10 22:58:26.945247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.945275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.945392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.945418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.945540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.945573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.945693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.945720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 
00:27:19.243 [2024-12-10 22:58:26.945816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.945842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.945952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.945979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.946071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.946099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.946241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.946267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.946384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.946412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 
00:27:19.243 [2024-12-10 22:58:26.946508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.946534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.946640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.946667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.946749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.946775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.946890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.946917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.947065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.947091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 
00:27:19.243 [2024-12-10 22:58:26.947188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.947214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.947333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.947360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.947475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.947501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.947631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.947658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.947790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.947831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 
00:27:19.243 [2024-12-10 22:58:26.948010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.948076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.948230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.948283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.948401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.948427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.948525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.948558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.948697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.948722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 
00:27:19.243 [2024-12-10 22:58:26.948809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.948835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.948944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.948970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.949067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.949093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.949216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.949243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.949355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.949386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 
00:27:19.243 [2024-12-10 22:58:26.949506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.949532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.949653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.949680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.949762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.949789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.949884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.949910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.950019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.950048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 
00:27:19.243 [2024-12-10 22:58:26.950178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.243 [2024-12-10 22:58:26.950205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.243 qpair failed and we were unable to recover it. 00:27:19.243 [2024-12-10 22:58:26.950326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.244 [2024-12-10 22:58:26.950351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.244 qpair failed and we were unable to recover it. 00:27:19.244 [2024-12-10 22:58:26.950464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.244 [2024-12-10 22:58:26.950490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.244 qpair failed and we were unable to recover it. 00:27:19.244 [2024-12-10 22:58:26.950607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.244 [2024-12-10 22:58:26.950633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.244 qpair failed and we were unable to recover it. 00:27:19.244 [2024-12-10 22:58:26.950743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.244 [2024-12-10 22:58:26.950770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.244 qpair failed and we were unable to recover it. 
00:27:19.244 [2024-12-10 22:58:26.950856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.244 [2024-12-10 22:58:26.950881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.244 qpair failed and we were unable to recover it. 00:27:19.244 [2024-12-10 22:58:26.950994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.244 [2024-12-10 22:58:26.951020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.244 qpair failed and we were unable to recover it. 00:27:19.244 [2024-12-10 22:58:26.951130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.244 [2024-12-10 22:58:26.951156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.244 qpair failed and we were unable to recover it. 00:27:19.244 [2024-12-10 22:58:26.951250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.244 [2024-12-10 22:58:26.951277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.244 qpair failed and we were unable to recover it. 00:27:19.244 [2024-12-10 22:58:26.951405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.244 [2024-12-10 22:58:26.951447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.244 qpair failed and we were unable to recover it. 
00:27:19.244 [2024-12-10 22:58:26.951585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.244 [2024-12-10 22:58:26.951627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.244 qpair failed and we were unable to recover it.
00:27:19.244 [2024-12-10 22:58:26.951751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.244 [2024-12-10 22:58:26.951779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.244 qpair failed and we were unable to recover it.
00:27:19.244 [2024-12-10 22:58:26.951867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.244 [2024-12-10 22:58:26.951894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.244 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.952014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.952042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.952153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.952182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.952276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.952303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.952379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.952403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.952520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.952553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.952666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.952693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.952818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.952846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.952978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.953006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.953152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.953184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.953318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.953358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.953476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.953505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.953639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.953668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.953762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.953790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-12-10 22:58:26.953868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-12-10 22:58:26.953895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.954034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.954062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.954206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.954233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.954371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.954398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.954526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.954562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.954658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.954686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.954804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.954832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.954978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.955005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.955148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.955176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.955296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.955323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.955411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.955438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.955562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.955591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.955732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.955757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.955845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.955872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.955957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.955982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.956134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.956174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.956274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.956304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.956416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.956445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.956526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.956561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.956654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.956681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.956769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.956797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.956886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.956913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.957023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.957055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.957172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.957199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.957336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.957363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.957480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.957507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.957591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.957617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.957729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.957757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.957864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.957904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.958022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.958050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.958196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.958223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.958334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.958361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.958454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.958481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.958601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.958630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.958716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-10 22:58:26.958742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-12-10 22:58:26.958826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.958852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.958982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.959034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.959149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.959176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.959266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.959292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.959374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.959399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.959473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.959500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.959620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.959647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.959787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.959813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.959935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.959961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.960077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.960103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.960224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.960253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.960380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.960406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.960531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.960579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.960727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.960755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.960852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.960892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.961040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.961069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.961161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.961189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.961313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.961340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.961452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.961480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.961606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.961637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.961796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.961824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.962007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.962034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.962160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.962211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.962353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.962379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.962491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.962518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.962638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.962667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.962762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.962789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.962875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.962901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.963022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.963049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.963126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.963153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.963288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.963329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.963448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.963477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.963599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.963626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.963719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.963745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.963851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.963877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.616 qpair failed and we were unable to recover it.
00:27:19.616 [2024-12-10 22:58:26.963956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.616 [2024-12-10 22:58:26.963981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.964065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.964093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.964240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.964267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.964394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.964435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.964588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.964617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.964735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.964762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.964883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.964910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.965032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.965060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.965156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.965187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.965332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.965361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.965472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.965500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.965650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.965677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.965764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.965793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.965914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.965943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.966094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.966150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.966324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.966351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.966471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.966497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.966582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.966607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.966691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.966717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.966828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.966859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.966943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.966970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.967120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.967173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.967286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.967312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.967454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.967481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.967588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.967615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.967698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.967727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.967846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.967875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.967985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.968013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.968101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.617 [2024-12-10 22:58:26.968128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.617 qpair failed and we were unable to recover it.
00:27:19.617 [2024-12-10 22:58:26.968208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-12-10 22:58:26.968235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-12-10 22:58:26.968338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-12-10 22:58:26.968378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-12-10 22:58:26.968499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-12-10 22:58:26.968527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-12-10 22:58:26.968627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-12-10 22:58:26.968655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-12-10 22:58:26.968778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-12-10 22:58:26.968806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 
00:27:19.617 [2024-12-10 22:58:26.968950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-12-10 22:58:26.968977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-12-10 22:58:26.969122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-12-10 22:58:26.969149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.969289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.969317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.969450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.969490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.969627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.969655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 
00:27:19.618 [2024-12-10 22:58:26.969798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.969827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.969904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.969930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.970120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.970175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.970266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.970295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.970451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.970492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 
00:27:19.618 [2024-12-10 22:58:26.970616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.970647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.970745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.970773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.970893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.970930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.971050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.971075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.971161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.971188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 
00:27:19.618 [2024-12-10 22:58:26.971305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.971332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.971464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.971505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.971623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.971664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.971816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.971844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.971959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.971985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 
00:27:19.618 [2024-12-10 22:58:26.972127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.972153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.972283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.972312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.972455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.972483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.972598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.972640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.972786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.972816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 
00:27:19.618 [2024-12-10 22:58:26.972897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.972924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.973044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.973072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.973189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.973216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.973360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.973387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.973508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.973535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 
00:27:19.618 [2024-12-10 22:58:26.973638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.973667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.973754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.973781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.973864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.973890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.974000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.974027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.974168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.974194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 
00:27:19.618 [2024-12-10 22:58:26.974343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.974384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-12-10 22:58:26.974477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-12-10 22:58:26.974505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.974603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.974631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.974729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.974757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.974878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.974905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 
00:27:19.619 [2024-12-10 22:58:26.975049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.975076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.975253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.975281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.975397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.975438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.975584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.975612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.975719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.975746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 
00:27:19.619 [2024-12-10 22:58:26.975838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.975865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.975951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.975978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.976092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.976120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.976211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.976239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.976356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.976382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 
00:27:19.619 [2024-12-10 22:58:26.976473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.976501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.976601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.976630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.976715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.976748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.976862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.976889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.977003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.977030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 
00:27:19.619 [2024-12-10 22:58:26.977253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.977307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.977401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.977430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.977542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.977579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.977667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.977694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.977780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.977807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 
00:27:19.619 [2024-12-10 22:58:26.978033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.978095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.978212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.978239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.619 [2024-12-10 22:58:26.978357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.619 [2024-12-10 22:58:26.978385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.619 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.978505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.978533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.978665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.978705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 
00:27:19.620 [2024-12-10 22:58:26.978858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.978887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.979000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.979040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.979190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.979218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.979327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.979353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.979432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.979456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 
00:27:19.620 [2024-12-10 22:58:26.979568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.979595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.979687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.979714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.979819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.979846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.979936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.979962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.980044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.980070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 
00:27:19.620 [2024-12-10 22:58:26.980158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.980184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.980306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.980335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.980451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.980478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.980609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.980651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.980775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.980809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 
00:27:19.620 [2024-12-10 22:58:26.980964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.980992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.981106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.981134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.981219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.981245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.981356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.981396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.981490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.981518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 
00:27:19.620 [2024-12-10 22:58:26.981639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.981667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.981778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.981805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.981921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.981947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.982060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.982086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.982231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.982259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 
00:27:19.620 [2024-12-10 22:58:26.982385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.982414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.982501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.982529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.982650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.982678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.982828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.982855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.982947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.982975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 
00:27:19.620 [2024-12-10 22:58:26.983094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.983122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.983206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.983231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.620 [2024-12-10 22:58:26.983388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.620 [2024-12-10 22:58:26.983429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.620 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.983562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.983591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.983707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.983734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 
00:27:19.621 [2024-12-10 22:58:26.983875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.983902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.983990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.984017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.984128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.984154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.984231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.984257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.984365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.984405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 
00:27:19.621 [2024-12-10 22:58:26.984558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.984586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.984677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.984704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.984783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.984807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.984948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.984974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.985052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.985076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 
00:27:19.621 [2024-12-10 22:58:26.985190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.985216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.985301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.985326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.985437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.985464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.985588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.985617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.985729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.985755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 
00:27:19.621 [2024-12-10 22:58:26.985893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.985920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.986065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.986092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.986214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.986242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.986383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.986412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.986502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.986534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 
00:27:19.621 [2024-12-10 22:58:26.986672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.986714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.986812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.986840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.986958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.986985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.987101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.987128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.987242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.987269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 
00:27:19.621 [2024-12-10 22:58:26.987411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.987437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.987591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.987620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.987706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.987732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.987827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.987852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.987957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.987983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 
00:27:19.621 [2024-12-10 22:58:26.988125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.621 [2024-12-10 22:58:26.988151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.621 qpair failed and we were unable to recover it. 00:27:19.621 [2024-12-10 22:58:26.988264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.988323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.988431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.988457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.988540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.988574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.988653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.988679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 
00:27:19.622 [2024-12-10 22:58:26.988787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.988814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.988897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.988922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.989014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.989040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.989150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.989176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.989288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.989315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 
00:27:19.622 [2024-12-10 22:58:26.989406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.989432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.989540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.989574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.989661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.989688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.989772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.989797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.989881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.989907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 
00:27:19.622 [2024-12-10 22:58:26.990012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.990039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.990123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.990155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.990268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.990294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.990383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.990408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.990559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.990600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 
00:27:19.622 [2024-12-10 22:58:26.990692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.990721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.990902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.990929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.991051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.991078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.991219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.991292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.991450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.991481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 
00:27:19.622 [2024-12-10 22:58:26.991598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.991627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.991747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.991773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.991861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.991886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.991999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.992025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.992217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.992282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 
00:27:19.622 [2024-12-10 22:58:26.992387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.992415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.992568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.992608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.992707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.992735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.992837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.992865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 00:27:19.622 [2024-12-10 22:58:26.992950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.622 [2024-12-10 22:58:26.992978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.622 qpair failed and we were unable to recover it. 
00:27:19.622 [2024-12-10 22:58:26.993095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.993122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.993242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.993271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.993364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.993392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.993475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.993504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.993624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.993650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 
00:27:19.623 [2024-12-10 22:58:26.993792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.993819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.993958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.993985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.994070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.994096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.994217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.994284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.994438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.994464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 
00:27:19.623 [2024-12-10 22:58:26.994574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.994601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.994701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.994730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.994884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.994911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.994997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.995024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.995208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.995236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 
00:27:19.623 [2024-12-10 22:58:26.995381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.995414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.995552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.995594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.995719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.995747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.995861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.995887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.996007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.996034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 
00:27:19.623 [2024-12-10 22:58:26.996130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.996155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.996246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.996272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.996408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.996448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.996584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.996626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.996722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.996751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 
00:27:19.623 [2024-12-10 22:58:26.996870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.996898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.997009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.997036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.997128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.997155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.997283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.997311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 00:27:19.623 [2024-12-10 22:58:26.997408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.623 [2024-12-10 22:58:26.997438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.623 qpair failed and we were unable to recover it. 
00:27:19.623 [2024-12-10 22:58:26.997559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.997587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.997707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.997734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.997850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.997878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.997994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.998020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.998112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.998139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 
00:27:19.624 [2024-12-10 22:58:26.998279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.998306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.998423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.998449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.998536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.998577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.998671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.998699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.998841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.998868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 
00:27:19.624 [2024-12-10 22:58:26.998986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.999013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.999153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.999180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.999295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.999323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.999415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.999441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.999563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.999593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 
00:27:19.624 [2024-12-10 22:58:26.999737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.999764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.999839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:26.999865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:26.999992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:27.000019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:27.000137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:27.000173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:27.000293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:27.000320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 
00:27:19.624 [2024-12-10 22:58:27.000427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:27.000453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:27.000540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:27.000575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:27.000695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:27.000722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:27.000803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:27.000829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:27.000957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:27.000984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 
00:27:19.624 [2024-12-10 22:58:27.001096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:27.001123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.624 [2024-12-10 22:58:27.001218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.624 [2024-12-10 22:58:27.001245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.624 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.001366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.001394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.001519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.001551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.001644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.001670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 
00:27:19.625 [2024-12-10 22:58:27.001782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.001809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.001900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.001928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.002076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.002103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.002219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.002246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.002389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.002416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 
00:27:19.625 [2024-12-10 22:58:27.002565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.002592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.002716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.002743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.002888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.002914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.003012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.003038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.003147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.003174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 
00:27:19.625 [2024-12-10 22:58:27.003315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.003341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.003485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.003512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.003613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.003641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.003765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.003792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.003901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.003927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 
00:27:19.625 [2024-12-10 22:58:27.004042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.004070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.004211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.004239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.004324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.004350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.004473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.004514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.004629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.004659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 
00:27:19.625 [2024-12-10 22:58:27.004754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.004783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.004868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.004896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.005018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.005046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.005163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.005191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.005302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.005329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 
00:27:19.625 [2024-12-10 22:58:27.005462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.005503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.005598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.005631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.005770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.005797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.005947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.005997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 00:27:19.625 [2024-12-10 22:58:27.006193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.625 [2024-12-10 22:58:27.006220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.625 qpair failed and we were unable to recover it. 
00:27:19.625 [2024-12-10 22:58:27.006385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.626 [2024-12-10 22:58:27.006443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.626 qpair failed and we were unable to recover it. 00:27:19.626 [2024-12-10 22:58:27.006536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.626 [2024-12-10 22:58:27.006569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.626 qpair failed and we were unable to recover it. 00:27:19.626 [2024-12-10 22:58:27.006694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.626 [2024-12-10 22:58:27.006721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.626 qpair failed and we were unable to recover it. 00:27:19.626 [2024-12-10 22:58:27.006819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.626 [2024-12-10 22:58:27.006848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.626 qpair failed and we were unable to recover it. 00:27:19.626 [2024-12-10 22:58:27.006967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.626 [2024-12-10 22:58:27.006995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.626 qpair failed and we were unable to recover it. 
00:27:19.626 [2024-12-10 22:58:27.007105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.007131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.007273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.007323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.007455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.007497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.007632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.007662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.007745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.007773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.007848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.007876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.008020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.008047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.008232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.008294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.008393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.008422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.008560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.008588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.008731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.008758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.008933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.008982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.009073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.009101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.009248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.009275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.009388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.009416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.009567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.009594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.009686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.009712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.009856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.009883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.010054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.010108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.010337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.010390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.010535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.010572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.010694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.010721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.010813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.010840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.010919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.010945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.011109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.011165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.011396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.011424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.011590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.011632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.011764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.011804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.011904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.626 [2024-12-10 22:58:27.011956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.626 qpair failed and we were unable to recover it.
00:27:19.626 [2024-12-10 22:58:27.012267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.012336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.012564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.012603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.012716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.012743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.012836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.012864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.012983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.013010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.013189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.013217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.013403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.013432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.013569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.013617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.013780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.013821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.013970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.014006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.014114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.014140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.014304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.014350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.014464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.014498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.014633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.014662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.014757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.014785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.014925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.014953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.015048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.015075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.015152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.015178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.015322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.015351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.015491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.015518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.015727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.015758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.015846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.015871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.016022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.016049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.016185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.016211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.016294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.016319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.016446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.016487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.016587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.016616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.016706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.016734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.016813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.016840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.017052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.017108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.017267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.017319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.017434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.017461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.017616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.017647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.627 [2024-12-10 22:58:27.017767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.627 [2024-12-10 22:58:27.017795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.627 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.017914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.017953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.018070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.018097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.018368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.018427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.018638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.018667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.018810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.018847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.019082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.019149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.019386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.019451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.019637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.019666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.019788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.019816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.019904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.019932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.020050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.020083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.020211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.020239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.020353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.020392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.020481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.020507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.020614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.020642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.020720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.020745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.020896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.020924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.021077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.021117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.021239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.021268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.021389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.021417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.021501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.021528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.021655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.021683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.021815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.021842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.021986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.022015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.022233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.022293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.022414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.022443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.022587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.022615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.022735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.022763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.022879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.022909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.023048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.023098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.023268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.023319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.628 qpair failed and we were unable to recover it.
00:27:19.628 [2024-12-10 22:58:27.023458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.628 [2024-12-10 22:58:27.023485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.023615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.023657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.023757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.023787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.023905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.023933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.024260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.024287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.024402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.024429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.024587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.024629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.024762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.024791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.024936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.024963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.025080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.025107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.025265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.025308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.025399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.025427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.025541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.025575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.025718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.025745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.025837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.025864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.026014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.026042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.026133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.026193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.026471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.026498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.026590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.026618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.026743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.026774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.026957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.629 [2024-12-10 22:58:27.027009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.629 qpair failed and we were unable to recover it.
00:27:19.629 [2024-12-10 22:58:27.027186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.027239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 00:27:19.629 [2024-12-10 22:58:27.027358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.027387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 00:27:19.629 [2024-12-10 22:58:27.027565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.027606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 00:27:19.629 [2024-12-10 22:58:27.027693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.027721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 00:27:19.629 [2024-12-10 22:58:27.027839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.027866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 
00:27:19.629 [2024-12-10 22:58:27.027982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.028009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 00:27:19.629 [2024-12-10 22:58:27.028123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.028184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 00:27:19.629 [2024-12-10 22:58:27.028310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.028361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 00:27:19.629 [2024-12-10 22:58:27.028458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.028499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 00:27:19.629 [2024-12-10 22:58:27.028644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.028684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 
00:27:19.629 [2024-12-10 22:58:27.028811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.028839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 00:27:19.629 [2024-12-10 22:58:27.029026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.029089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 00:27:19.629 [2024-12-10 22:58:27.029265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.629 [2024-12-10 22:58:27.029323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.629 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.029414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.029441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.029552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.029595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 
00:27:19.630 [2024-12-10 22:58:27.029729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.029771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.029891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.029921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.030034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.030062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.030154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.030181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.030277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.030306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 
00:27:19.630 [2024-12-10 22:58:27.030392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.030421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.030514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.030551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.030639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.030665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.030742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.030767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.030847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.030873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 
00:27:19.630 [2024-12-10 22:58:27.030985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.031014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.031162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.031191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.031334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.031363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.031478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.031505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.031602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.031629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 
00:27:19.630 [2024-12-10 22:58:27.031714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.031740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.031854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.031882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.031965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.031991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.032077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.032103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.032216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.032242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 
00:27:19.630 [2024-12-10 22:58:27.032351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.032378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.032464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.032490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.032574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.032602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.032680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.032707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 00:27:19.630 [2024-12-10 22:58:27.032795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.630 [2024-12-10 22:58:27.032824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.630 qpair failed and we were unable to recover it. 
00:27:19.631 [2024-12-10 22:58:27.032943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.032971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.033059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.033086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.033199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.033227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.033323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.033351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.033473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.033500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 
00:27:19.631 [2024-12-10 22:58:27.033599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.033628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.033739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.033765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.033912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.033963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.034197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.034250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.034455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.034525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 
00:27:19.631 [2024-12-10 22:58:27.034738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.034779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.034928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.034967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.035160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.035207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.035338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.035378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.035468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.035493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 
00:27:19.631 [2024-12-10 22:58:27.035610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.035638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.035757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.035798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.035886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.035911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.035991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.036015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.036169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.036224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 
00:27:19.631 [2024-12-10 22:58:27.036361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.036400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.036489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.036513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.036608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.036634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.036755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.036806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.036954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.036983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 
00:27:19.631 [2024-12-10 22:58:27.037080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.037107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.037288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.037342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.037471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.037512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.037668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.037698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.037789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.037817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 
00:27:19.631 [2024-12-10 22:58:27.038033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.038062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.038243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.038302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.038456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.038485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.038604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.038629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.631 qpair failed and we were unable to recover it. 00:27:19.631 [2024-12-10 22:58:27.038740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.631 [2024-12-10 22:58:27.038769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.632 qpair failed and we were unable to recover it. 
00:27:19.632 [2024-12-10 22:58:27.038847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.632 [2024-12-10 22:58:27.038871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.632 qpair failed and we were unable to recover it.
00:27:19.632 [2024-12-10 22:58:27.039474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.632 [2024-12-10 22:58:27.039505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:19.632 qpair failed and we were unable to recover it.
00:27:19.632 [2024-12-10 22:58:27.040652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.632 [2024-12-10 22:58:27.040681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.632 qpair failed and we were unable to recover it.
00:27:19.632 [2024-12-10 22:58:27.040814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.632 [2024-12-10 22:58:27.040854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.632 qpair failed and we were unable to recover it.
00:27:19.635 [2024-12-10 22:58:27.056845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.635 [2024-12-10 22:58:27.056884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-12-10 22:58:27.057074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.635 [2024-12-10 22:58:27.057142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-12-10 22:58:27.057302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.635 [2024-12-10 22:58:27.057331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-12-10 22:58:27.057449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.635 [2024-12-10 22:58:27.057477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-12-10 22:58:27.057615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.635 [2024-12-10 22:58:27.057657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.635 qpair failed and we were unable to recover it. 
00:27:19.635 [2024-12-10 22:58:27.057778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.635 [2024-12-10 22:58:27.057809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-12-10 22:58:27.057928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.635 [2024-12-10 22:58:27.057956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-12-10 22:58:27.058099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.635 [2024-12-10 22:58:27.058126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-12-10 22:58:27.058281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.635 [2024-12-10 22:58:27.058309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-12-10 22:58:27.058448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.058476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 
00:27:19.636 [2024-12-10 22:58:27.058606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.058635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.058724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.058752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.058885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.058926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.059089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.059151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.059388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.059440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 
00:27:19.636 [2024-12-10 22:58:27.059584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.059612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.059726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.059752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.059872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.059899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.060072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.060129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.060285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.060338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 
00:27:19.636 [2024-12-10 22:58:27.060454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.060481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.060603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.060629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.060713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.060738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.060843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.060869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.060983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.061011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 
00:27:19.636 [2024-12-10 22:58:27.061097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.061122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.061265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.061292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.061385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.061412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.061520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.061635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.061761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.061788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 
00:27:19.636 [2024-12-10 22:58:27.061885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.061912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.062006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.062032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.062174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.062201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.062345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.062371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.062495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.062535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 
00:27:19.636 [2024-12-10 22:58:27.062673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.062715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.062878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.062920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.063068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.063139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.063289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.063351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.063463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.063489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 
00:27:19.636 [2024-12-10 22:58:27.063571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.063596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.063676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.063702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.063821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.063851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-12-10 22:58:27.064072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.636 [2024-12-10 22:58:27.064147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.064417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.064484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-12-10 22:58:27.064700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.064729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.064845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.064872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.064964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.065046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.065201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.065234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.065336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.065362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-12-10 22:58:27.065494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.065539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.065675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.065715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.065837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.065866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.066051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.066079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.066307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.066361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-12-10 22:58:27.066469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.066496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.066593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.066620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.066706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.066731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.066901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.066951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.067113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.067173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-12-10 22:58:27.067328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.067386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.067470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.067494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.067596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.067622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.067711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.067737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.067865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.067906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-12-10 22:58:27.067997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.068024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.068159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.068215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.068355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.068395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.068493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.068518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.068645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.068672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-12-10 22:58:27.068786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.068811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.068922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.068949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.069065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.069092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.069177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.069204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-12-10 22:58:27.069353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.637 [2024-12-10 22:58:27.069379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-12-10 22:58:27.069509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.637 [2024-12-10 22:58:27.069557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.637 qpair failed and we were unable to recover it.
00:27:19.637 [2024-12-10 22:58:27.069687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.637 [2024-12-10 22:58:27.069715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.637 qpair failed and we were unable to recover it.
00:27:19.637 [2024-12-10 22:58:27.069835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.637 [2024-12-10 22:58:27.069868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.637 qpair failed and we were unable to recover it.
00:27:19.637 [2024-12-10 22:58:27.069986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.070015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.070155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.070183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.070299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.070327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.070460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.070500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.070630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.070659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.070779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.070808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.070925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.070953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.071128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.071185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.071413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.071471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.071571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.071598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.071682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.071707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.071801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.071827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.071942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.071970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.072118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.072144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.072258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.072284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.072395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.072421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.072507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.072538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.072647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.072674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.072782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.072808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.072915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.072942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.073052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.073079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.073170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.073198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.073273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.073297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.073439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.073466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.073584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.073611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.073705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.073732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.073843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.073875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.074016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.074043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.074188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.074214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.074335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.074360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.074443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.074467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.074619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.074644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.074732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.074757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.074839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.074864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.074940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.074965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.638 [2024-12-10 22:58:27.075073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.638 [2024-12-10 22:58:27.075097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.638 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.075176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.075200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.075279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.075309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.075417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.075442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.075526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.075561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.075662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.075687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.075798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.075824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.075941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.075966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.076053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.076077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.076192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.076216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.076332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.076358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.076436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.076461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.076577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.076604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.076720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.076744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.076830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.076855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.076969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.076994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.077107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.077132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.077273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.077298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.077415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.077440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.077574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.077614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.077733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.077761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.077861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.077888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.077985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.078011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.078096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.078124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.078218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.078244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.078367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.078393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.078520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.078552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.078651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.078678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.078787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.078812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.078921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.078946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.079029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.079054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.079132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.079156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.079244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.079269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.079386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.079411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.079531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.079567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.079707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.079731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.639 qpair failed and we were unable to recover it.
00:27:19.639 [2024-12-10 22:58:27.079876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.639 [2024-12-10 22:58:27.079902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.080015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.080040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.080154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.080179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.080294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.080318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.080458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.080483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.080571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.080597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.080766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.080815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.080980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.081030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.081118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.081144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.081256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.081281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.081431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.081457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.081576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.081602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.081771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.081828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.082056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.082081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.082172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.082197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.082293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.082317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.082456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.082481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.082562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.082587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.082755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.082806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.082886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.082912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.083018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.083042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.083136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.083161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.083276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.083301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.083461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.083500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.083609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.083638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.083780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.083808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.083900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.083927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.084038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.084064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.084151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.084177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.084299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.084325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.084446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.084473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.084592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.640 [2024-12-10 22:58:27.084619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.640 qpair failed and we were unable to recover it.
00:27:19.640 [2024-12-10 22:58:27.084793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.641 [2024-12-10 22:58:27.084843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.641 qpair failed and we were unable to recover it.
00:27:19.641 [2024-12-10 22:58:27.084932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.641 [2024-12-10 22:58:27.084957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.641 qpair failed and we were unable to recover it.
00:27:19.641 [2024-12-10 22:58:27.085048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.641 [2024-12-10 22:58:27.085074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.641 qpair failed and we were unable to recover it.
00:27:19.641 [2024-12-10 22:58:27.085189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.641 [2024-12-10 22:58:27.085214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.641 qpair failed and we were unable to recover it.
00:27:19.641 [2024-12-10 22:58:27.085319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.641 [2024-12-10 22:58:27.085349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.641 qpair failed and we were unable to recover it.
00:27:19.641 [2024-12-10 22:58:27.085465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.641 [2024-12-10 22:58:27.085489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.641 qpair failed and we were unable to recover it.
00:27:19.641 [2024-12-10 22:58:27.085573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.641 [2024-12-10 22:58:27.085599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:19.641 qpair failed and we were unable to recover it.
00:27:19.641 [2024-12-10 22:58:27.085720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.641 [2024-12-10 22:58:27.085747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.641 qpair failed and we were unable to recover it.
00:27:19.641 [2024-12-10 22:58:27.085827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.641 [2024-12-10 22:58:27.085854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.641 qpair failed and we were unable to recover it.
00:27:19.641 [2024-12-10 22:58:27.085962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.085988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.086111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.086138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.086231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.086258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.086353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.086379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.086524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.086557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 
00:27:19.641 [2024-12-10 22:58:27.086681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.086708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.086798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.086825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.086937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.086963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.087110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.087136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.087232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.087258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 
00:27:19.641 [2024-12-10 22:58:27.087374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.087401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.087493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.087518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.087638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.087664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.087810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.087835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.087975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.088000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 
00:27:19.641 [2024-12-10 22:58:27.088109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.088133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.088249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.088275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.088421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.088446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.088559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.088586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.088725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.088767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 
00:27:19.641 [2024-12-10 22:58:27.088900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.088944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.089077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.089122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.089262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.089291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.089432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.089456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.641 [2024-12-10 22:58:27.089542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.089574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 
00:27:19.641 [2024-12-10 22:58:27.089658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.641 [2024-12-10 22:58:27.089683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.641 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.089819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.089864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.089972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.090017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.090133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.090158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.090272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.090296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 
00:27:19.642 [2024-12-10 22:58:27.090437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.090462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.090555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.090581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.090666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.090691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.090778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.090802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.090942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.090967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 
00:27:19.642 [2024-12-10 22:58:27.091050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.091075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.091172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.091196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.091320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.091345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.091447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.091472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.091573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.091599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 
00:27:19.642 [2024-12-10 22:58:27.091709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.091734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.091820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.091845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.091957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.091982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.092093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.092118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.092198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.092223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 
00:27:19.642 [2024-12-10 22:58:27.092318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.092344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.092470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.092495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.092601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.092628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.092722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.092749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.092869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.092895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 
00:27:19.642 [2024-12-10 22:58:27.093012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.093037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.093156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.093181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.093293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.093318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.093429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.093454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.093576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.093602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 
00:27:19.642 [2024-12-10 22:58:27.093712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.093737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.093820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.093846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.093960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.093985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.094079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.094104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.642 [2024-12-10 22:58:27.094192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.094217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 
00:27:19.642 [2024-12-10 22:58:27.094301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.642 [2024-12-10 22:58:27.094327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.642 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.094446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.094472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.094593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.094619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.094738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.094764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.094882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.094907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 
00:27:19.643 [2024-12-10 22:58:27.095050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.095075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.095194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.095219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.095335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.095360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.095449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.095473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.095587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.095612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 
00:27:19.643 [2024-12-10 22:58:27.095757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.095783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.095895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.095920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.096034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.096060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.096203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.096228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.096320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.096344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 
00:27:19.643 [2024-12-10 22:58:27.096462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.096486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.096578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.096604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.096718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.096747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.096873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.096898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.096980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.097004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 
00:27:19.643 [2024-12-10 22:58:27.097092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.097117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.097231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.097256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.097364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.097390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.097471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.097496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.097615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.097641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 
00:27:19.643 [2024-12-10 22:58:27.097838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.097864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.098008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.098033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.098148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.098174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.098281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.098306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.098406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.098431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 
00:27:19.643 [2024-12-10 22:58:27.098569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.098599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.098680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.098705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.098814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.098840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.098925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.098952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 00:27:19.643 [2024-12-10 22:58:27.099047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.643 [2024-12-10 22:58:27.099072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.643 qpair failed and we were unable to recover it. 
00:27:19.643 [2024-12-10 22:58:27.099145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.099171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.099284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.099310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.099389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.099414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.099568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.099607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.099725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.099753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 
00:27:19.644 [2024-12-10 22:58:27.099901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.099928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.100026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.100052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.100167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.100193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.100313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.100339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.100468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.100494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 
00:27:19.644 [2024-12-10 22:58:27.100623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.100649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.100797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.100829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.100963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.100994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.101099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.101131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.101295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.101328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 
00:27:19.644 [2024-12-10 22:58:27.101467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.101496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.101612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.101638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.101781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.101826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.101962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.101992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.102175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.102218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 
00:27:19.644 [2024-12-10 22:58:27.102356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.102381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.102471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.102497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.102599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.102625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.102734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.102779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.102872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.102898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 
00:27:19.644 [2024-12-10 22:58:27.103010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.103035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.103177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.103202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.103285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.103310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.103419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.103445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.103529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.103560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 
00:27:19.644 [2024-12-10 22:58:27.103679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.103705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.103816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.103842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.103948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.103973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.104092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.104117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.104232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.104258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 
00:27:19.644 [2024-12-10 22:58:27.104399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.104424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.644 qpair failed and we were unable to recover it. 00:27:19.644 [2024-12-10 22:58:27.104508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.644 [2024-12-10 22:58:27.104533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.104631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.104657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.104743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.104768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.104907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.104932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 
00:27:19.645 [2024-12-10 22:58:27.105013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.105038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.105115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.105140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.105228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.105252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.105338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.105363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.105449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.105473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 
00:27:19.645 [2024-12-10 22:58:27.105556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.105582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.105694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.105719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.105800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.105825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.105938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.105964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.106100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.106129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 
00:27:19.645 [2024-12-10 22:58:27.106258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.106284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.106428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.106454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.106606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.106632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.106723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.106748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.106833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.106864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 
00:27:19.645 [2024-12-10 22:58:27.106981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.107006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.107092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.107117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.107230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.107255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.107375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.107401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.107493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.107518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 
00:27:19.645 [2024-12-10 22:58:27.107651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.107678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.107768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.107793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.107901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.107926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.108128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.108154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.108248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.108273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 
00:27:19.645 [2024-12-10 22:58:27.108387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.108413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.645 qpair failed and we were unable to recover it. 00:27:19.645 [2024-12-10 22:58:27.108527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.645 [2024-12-10 22:58:27.108560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.108650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.108676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.108789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.108813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.108898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.108923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 
00:27:19.646 [2024-12-10 22:58:27.109048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.109073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.109165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.109190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.109308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.109333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.109408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.109434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.109552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.109577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 
00:27:19.646 [2024-12-10 22:58:27.109656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.109681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.109789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.109817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.109945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.109971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.110091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.110115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.110199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.110224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 
00:27:19.646 [2024-12-10 22:58:27.110336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.110361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.110451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.110477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.110592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.110617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.110730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.110756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 00:27:19.646 [2024-12-10 22:58:27.110842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.110867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 
00:27:19.646 [2024-12-10 22:58:27.111277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.646 [2024-12-10 22:58:27.111316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.646 qpair failed and we were unable to recover it. 
00:27:19.649 [2024-12-10 22:58:27.128673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.128699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 00:27:19.649 [2024-12-10 22:58:27.128839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.128865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 00:27:19.649 [2024-12-10 22:58:27.128978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.129024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 00:27:19.649 [2024-12-10 22:58:27.129157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.129188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 00:27:19.649 [2024-12-10 22:58:27.129354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.129385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 
00:27:19.649 [2024-12-10 22:58:27.129530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.129562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 00:27:19.649 [2024-12-10 22:58:27.129682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.129708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 00:27:19.649 [2024-12-10 22:58:27.129827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.129853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 00:27:19.649 [2024-12-10 22:58:27.129966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.129998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 00:27:19.649 [2024-12-10 22:58:27.130130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.130161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 
00:27:19.649 [2024-12-10 22:58:27.130320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.130351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 00:27:19.649 [2024-12-10 22:58:27.130510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.130540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 00:27:19.649 [2024-12-10 22:58:27.130715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.649 [2024-12-10 22:58:27.130742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.649 qpair failed and we were unable to recover it. 00:27:19.649 [2024-12-10 22:58:27.130858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.130885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.130976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.131002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 
00:27:19.650 [2024-12-10 22:58:27.131099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.131131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.131277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.131309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.131492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.131522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.131687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.131714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.131848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.131879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 
00:27:19.650 [2024-12-10 22:58:27.132010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.132042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.132178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.132203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.132322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.132348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.132467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.132498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.132648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.132680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 
00:27:19.650 [2024-12-10 22:58:27.132813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.132845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.132980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.133013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.133146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.133177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.133339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.133370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.133478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.133510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 
00:27:19.650 [2024-12-10 22:58:27.133650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.133681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.133821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.133852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.133994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.134027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.134120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.134156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.134289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.134320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 
00:27:19.650 [2024-12-10 22:58:27.134455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.134486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.134599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.134631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.134761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.134792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.134928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.134959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.135088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.135119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 
00:27:19.650 [2024-12-10 22:58:27.135254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.135286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.135444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.135477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.135611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.650 [2024-12-10 22:58:27.135643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.650 qpair failed and we were unable to recover it. 00:27:19.650 [2024-12-10 22:58:27.135776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.135808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.135946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.135977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 
00:27:19.651 [2024-12-10 22:58:27.136085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.136117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.136250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.136281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.136419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.136450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.136586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.136618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.136721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.136752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 
00:27:19.651 [2024-12-10 22:58:27.136849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.136880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.136978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.137010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.137173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.137205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.137361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.137406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.137526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.137564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 
00:27:19.651 [2024-12-10 22:58:27.137690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.137722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.137828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.137859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.138010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.138042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.138181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.138213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.138327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.138358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 
00:27:19.651 [2024-12-10 22:58:27.138478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.138509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.138649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.138681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.138786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.138818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.138909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.138940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.139112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.139144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 
00:27:19.651 [2024-12-10 22:58:27.139247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.139278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.139398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.139430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.139566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.139598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.139731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.139762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.139894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.139926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 
00:27:19.651 [2024-12-10 22:58:27.140051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.140082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.140220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.140252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.140388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.140419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.140558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.140596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 00:27:19.651 [2024-12-10 22:58:27.140702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.651 [2024-12-10 22:58:27.140733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.651 qpair failed and we were unable to recover it. 
00:27:19.651 [2024-12-10 22:58:27.140865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.651 [2024-12-10 22:58:27.140896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.651 qpair failed and we were unable to recover it.
... [the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 22:58:27.141038 through 22:58:27.159524] ...
00:27:19.655 [2024-12-10 22:58:27.159803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.655 [2024-12-10 22:58:27.159833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.655 qpair failed and we were unable to recover it.
00:27:19.655 [2024-12-10 22:58:27.159947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.159978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.160118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.160149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.160284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.160315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.160448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.160486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.160599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.160631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 
00:27:19.655 [2024-12-10 22:58:27.160735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.160766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.160886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.160919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.161081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.161113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.161240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.161272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.161376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.161408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 
00:27:19.655 [2024-12-10 22:58:27.161524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.161563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.161733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.161764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.161928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.161959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.162091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.162122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.162234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.162265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 
00:27:19.655 [2024-12-10 22:58:27.162405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.162436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.162592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.162623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.162768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.655 [2024-12-10 22:58:27.162799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.655 qpair failed and we were unable to recover it. 00:27:19.655 [2024-12-10 22:58:27.162937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.162969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.163079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.163110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 
00:27:19.656 [2024-12-10 22:58:27.163240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.163271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.163406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.163437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.163562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.163593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.163742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.163774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.163907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.163938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 
00:27:19.656 [2024-12-10 22:58:27.164076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.164107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.164269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.164301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.164434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.164465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.164576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.164608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.164754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.164785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 
00:27:19.656 [2024-12-10 22:58:27.164903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.164935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.165076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.165107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.165267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.165298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.165413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.165444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.165565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.165597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 
00:27:19.656 [2024-12-10 22:58:27.165730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.165761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.165901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.165932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.166028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.166060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.166194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.166226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.166366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.166396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 
00:27:19.656 [2024-12-10 22:58:27.166537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.166586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.166686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.166716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.166858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.166890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.167001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.167038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.167171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.167202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 
00:27:19.656 [2024-12-10 22:58:27.167330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.167360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.167503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.167534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.167684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.167715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.167849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.167882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.167996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.168028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 
00:27:19.656 [2024-12-10 22:58:27.168189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.168221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.168360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.168391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.168533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.168570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.168709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.168740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 00:27:19.656 [2024-12-10 22:58:27.168877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.168909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.656 qpair failed and we were unable to recover it. 
00:27:19.656 [2024-12-10 22:58:27.169019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.656 [2024-12-10 22:58:27.169050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.169184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.169215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.169366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.169397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.169505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.169537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.169676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.169707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 
00:27:19.657 [2024-12-10 22:58:27.169813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.169844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.169970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.170001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.170108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.170140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.170268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.170299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.170409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.170441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 
00:27:19.657 [2024-12-10 22:58:27.170576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.170608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.170734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.170765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.170910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.170942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.171079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.171110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.171247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.171278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 
00:27:19.657 [2024-12-10 22:58:27.171447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.171479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.171588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.171622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.171728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.171759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.171894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.171925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 00:27:19.657 [2024-12-10 22:58:27.172058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.657 [2024-12-10 22:58:27.172090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.657 qpair failed and we were unable to recover it. 
00:27:19.657 [2024-12-10 22:58:27.172190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.657 [2024-12-10 22:58:27.172222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.657 qpair failed and we were unable to recover it.
00:27:19.660 [2024-12-10 22:58:27.190448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.190486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.190612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.190644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.190773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.190806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.190922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.190953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.191118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.191150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 
00:27:19.660 [2024-12-10 22:58:27.191281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.191313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.191408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.191461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.191608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.191640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.191775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.191806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.191910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.191942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 
00:27:19.660 [2024-12-10 22:58:27.192075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.192106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.192238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.192269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.192430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.192461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.192596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.192628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.660 [2024-12-10 22:58:27.192733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.192765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 
00:27:19.660 [2024-12-10 22:58:27.192897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.660 [2024-12-10 22:58:27.192929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.660 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.193063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.193095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.193235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.193266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.193394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.193427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.193588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.193620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 
00:27:19.661 [2024-12-10 22:58:27.193784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.193815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.193923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.193956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.194063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.194095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.194224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.194256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.194391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.194423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 
00:27:19.661 [2024-12-10 22:58:27.194559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.194591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.194726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.194757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.194920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.194951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.195064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.195096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.195229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.195261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 
00:27:19.661 [2024-12-10 22:58:27.195371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.195402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.195500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.195532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.195679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.195710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.195840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.195871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.196010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.196041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 
00:27:19.661 [2024-12-10 22:58:27.196159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.196190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.196294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.196326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.196454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.196485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.196632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.196665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.196779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.196810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 
00:27:19.661 [2024-12-10 22:58:27.196949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.196987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.197091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.197123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.197226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.197258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.197388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.197419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.197563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.197596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 
00:27:19.661 [2024-12-10 22:58:27.197705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.197736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.197898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.197929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.198096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.198127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.198268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.198299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.198431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.198463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 
00:27:19.661 [2024-12-10 22:58:27.198565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.198597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.198709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.198740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.198885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.198916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.199042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.199074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.661 [2024-12-10 22:58:27.199210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.199242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 
00:27:19.661 [2024-12-10 22:58:27.199349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.661 [2024-12-10 22:58:27.199380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.661 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.199494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.199526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.199687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.199718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.199878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.199910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.200044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.200075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 
00:27:19.662 [2024-12-10 22:58:27.200215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.200246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.200354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.200386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.200519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.200562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.200698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.200730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.200837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.200868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 
00:27:19.662 [2024-12-10 22:58:27.201004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.201035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.201196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.201227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.201369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.201400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.201506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.201538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.201717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.201781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 
00:27:19.662 [2024-12-10 22:58:27.201989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.202029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.202159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.202217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.202351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.202406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.202568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.202600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 00:27:19.662 [2024-12-10 22:58:27.202700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.662 [2024-12-10 22:58:27.202731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.662 qpair failed and we were unable to recover it. 
00:27:19.662 [2024-12-10 22:58:27.202840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:19.662 [2024-12-10 22:58:27.202872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 
00:27:19.662 qpair failed and we were unable to recover it. 
00:27:19.665 [2024-12-10 22:58:27.221362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.665 [2024-12-10 22:58:27.221394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.665 qpair failed and we were unable to recover it. 00:27:19.665 [2024-12-10 22:58:27.221521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.665 [2024-12-10 22:58:27.221558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.665 qpair failed and we were unable to recover it. 00:27:19.665 [2024-12-10 22:58:27.221697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.665 [2024-12-10 22:58:27.221729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.665 qpair failed and we were unable to recover it. 00:27:19.665 [2024-12-10 22:58:27.221861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.665 [2024-12-10 22:58:27.221893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.665 qpair failed and we were unable to recover it. 00:27:19.665 [2024-12-10 22:58:27.222001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.665 [2024-12-10 22:58:27.222032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.222128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.222158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.222320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.222352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.222484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.222515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.222627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.222658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.222762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.222794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.222930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.222967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.223080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.223111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.223250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.223280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.223416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.223448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.223576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.223608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.223709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.223741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.223844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.223875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.224005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.224036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.224168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.224200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.224328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.224358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.224467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.224497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.224613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.224646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.224754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.224787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.224950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.224981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.225087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.225118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.225249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.225281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.225409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.225439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.225577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.225609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.225700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.225731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.225917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.225948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.226057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.226089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.226256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.226286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.226417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.226449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.226559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.226592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.226733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.226764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.226907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.226938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.227041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.227072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.227213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.227245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.227356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.227388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.227525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.227565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.227702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.227733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.227894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.227925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.228032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.228063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.228197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.228228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.228355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.228386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.228489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.228520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.228648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.228679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.228779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.228810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.228944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.228976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.229080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.229111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.229243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.229279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.229379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.229411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.229554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.229596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.229722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.229754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.229896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.229928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.230034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.230066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.230200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.230232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.230333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.230365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.230505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.230536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.230651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.230683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.230798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.230834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.230967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.230998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.231151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.231184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.231357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.231394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 00:27:19.666 [2024-12-10 22:58:27.231533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.231573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.666 qpair failed and we were unable to recover it. 
00:27:19.666 [2024-12-10 22:58:27.231716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.666 [2024-12-10 22:58:27.231748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.231889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.231920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.232021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.232054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.232178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.232209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.232320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.232351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 
00:27:19.667 [2024-12-10 22:58:27.232495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.232528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.232652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.232684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.232814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.232846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.232982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.233013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.233186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.233218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 
00:27:19.667 [2024-12-10 22:58:27.233353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.233384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.233517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.233565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.233713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.233744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.233847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.233878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 00:27:19.667 [2024-12-10 22:58:27.234034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.667 [2024-12-10 22:58:27.234065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.667 qpair failed and we were unable to recover it. 
00:27:19.668 [2024-12-10 22:58:27.252300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.668 [2024-12-10 22:58:27.252336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.668 qpair failed and we were unable to recover it. 00:27:19.668 [2024-12-10 22:58:27.252445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.252480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.252670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.252707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.252825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.252861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.253037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.253072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.253223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.253258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.253435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.253471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.253585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.253621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.253764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.253799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.253915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.253951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.254080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.254116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.254283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.254318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.254487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.254524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.254707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.254743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.254891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.254927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.255070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.255107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.255273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.255309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.255444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.255482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.255661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.255698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.255851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.255886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.256030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.256066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.256246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.256281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.256437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.256473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.256597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.256635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.256776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.256811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.256964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.256999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.257158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.257194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.257342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.257377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.257558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.257604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.257778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.257814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.257960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.257996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.258106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.258140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.258250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.258285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.258398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.258433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.258575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.258612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.258737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.258772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.258913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.258949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.259085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.259120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.259233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.259270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.259420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.259456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.259601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.259637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.259760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.259795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.259942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.259977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.260098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.260134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.260250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.260285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.260429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.260464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.260622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.260659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.260808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.260850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.260995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.261030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.261150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.261185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.261311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.261347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.261490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.261525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.261655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.261692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.261853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.261889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.262051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.262088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.262237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.262272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.262425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.262461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.262614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.262650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.262799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.262835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.262979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.263015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.263138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.263173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.263321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.263357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.263540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.263585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.263737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.263774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.263907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.263942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 
00:27:19.669 [2024-12-10 22:58:27.264090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.669 [2024-12-10 22:58:27.264125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.669 qpair failed and we were unable to recover it. 00:27:19.669 [2024-12-10 22:58:27.264312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.264349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.264541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.264586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.264708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.264744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.264898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.264933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.265061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.265097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.265238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.265273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.265428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.265465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.265624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.265662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.265832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.265869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.266051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.266088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.266199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.266236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.266383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.266420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.266551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.266595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.266719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.266756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.266939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.266975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.267124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.267162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.267277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.267315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.267469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.267506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.267646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.267684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.267792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.267830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.267959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.267997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.268149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.268196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.268378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.268416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.268562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.268601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.268737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.268775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.268890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.268928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.269081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.269118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.269247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.269283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.269468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.269507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.269669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.269706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.269830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.269868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.270014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.270051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.270195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.270233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.270380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.270417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.270543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.270590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.270761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.270799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.270991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.271028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.271183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.271219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.271346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.271383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.271501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.271540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.271688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.271725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.271857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.271894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.272045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.272083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.272231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.272268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.272403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.272442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.272585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.272624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.272805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.272842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.272993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.273031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.273208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.273247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.273379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.273416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.273569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.273608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.273733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.273770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.273956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.273993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.274111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.274150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.274333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.274370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.274492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.274529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.274688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.274726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.274865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.274902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.275027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.275066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 
00:27:19.670 [2024-12-10 22:58:27.275220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.275259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.275378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.275415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.275567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.275616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.670 qpair failed and we were unable to recover it. 00:27:19.670 [2024-12-10 22:58:27.275739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.670 [2024-12-10 22:58:27.275778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.275902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.275939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 
00:27:19.671 [2024-12-10 22:58:27.276066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.276103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.276219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.276257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.276385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.276422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.276535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.276600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.276791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.276830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 
00:27:19.671 [2024-12-10 22:58:27.276975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.277012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.277188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.277224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.277326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.277364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.277522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.277567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.277731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.277768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 
00:27:19.671 [2024-12-10 22:58:27.277889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.277926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.278087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.278124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.278246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.278284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.278411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.278448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.278612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.278650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 
00:27:19.671 [2024-12-10 22:58:27.278813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.278850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.279008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.279046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.279182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.279219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.279341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.279379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.279495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.279533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 
00:27:19.671 [2024-12-10 22:58:27.279705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.279743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.279905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.279943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.280057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.280094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.280242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.280280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.280399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.280437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 
00:27:19.671 [2024-12-10 22:58:27.280567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.280604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.280725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.280762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.280910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.280948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.281139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.281176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.281309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.281347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 
00:27:19.671 [2024-12-10 22:58:27.281495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.281532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.281674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.281711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.281860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.281898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.282013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.282050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.282165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.282202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 
00:27:19.671 [2024-12-10 22:58:27.282387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.282425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.282570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.282609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.282754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.282797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.282983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.283020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 00:27:19.671 [2024-12-10 22:58:27.283180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.283216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it. 
00:27:19.671 [2024-12-10 22:58:27.283405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.671 [2024-12-10 22:58:27.283444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.671 qpair failed and we were unable to recover it.
00:27:19.673 [2024-12-10 22:58:27.304374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.304411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.304595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.304635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.304758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.304805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.304955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.304993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.305151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.305189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 
00:27:19.673 [2024-12-10 22:58:27.305358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.305397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.305563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.305601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.305779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.305835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.305950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.305988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.306142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.306179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 
00:27:19.673 [2024-12-10 22:58:27.306335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.306373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.306473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.306507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.306648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.306675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.306769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.306795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.306918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.306956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 
00:27:19.673 [2024-12-10 22:58:27.307071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.307108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.307245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.307282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.307399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.307437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.307589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.307616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.307706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.307732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 
00:27:19.673 [2024-12-10 22:58:27.307875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.307913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.308061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.308098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.308265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.308303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.308471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.308498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.308610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.308638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 
00:27:19.673 [2024-12-10 22:58:27.308756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.308783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.308907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.308952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.309066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.309104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.309232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.309270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.309415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.309478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 
00:27:19.673 [2024-12-10 22:58:27.309609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.309642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.309779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.309807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.309973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.310019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.310189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.310230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 00:27:19.673 [2024-12-10 22:58:27.310377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.673 [2024-12-10 22:58:27.310434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.673 qpair failed and we were unable to recover it. 
00:27:19.673 [2024-12-10 22:58:27.310608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.310637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.310761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.310789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.310875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.310907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.310995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.311021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.311108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.311140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 
00:27:19.674 [2024-12-10 22:58:27.311295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.311356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.311523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.311571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.311689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.311715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.311813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.311840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.311932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.311959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 
00:27:19.674 [2024-12-10 22:58:27.312097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.312134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.312284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.312338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.312452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.312491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.312675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.312702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.312807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.312851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 
00:27:19.674 [2024-12-10 22:58:27.313066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.313103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.674 qpair failed and we were unable to recover it. 00:27:19.674 [2024-12-10 22:58:27.313236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.674 [2024-12-10 22:58:27.313272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.955 qpair failed and we were unable to recover it. 00:27:19.955 [2024-12-10 22:58:27.313434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.955 [2024-12-10 22:58:27.313472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.955 qpair failed and we were unable to recover it. 00:27:19.955 [2024-12-10 22:58:27.313642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.955 [2024-12-10 22:58:27.313670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.955 qpair failed and we were unable to recover it. 00:27:19.955 [2024-12-10 22:58:27.313808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.955 [2024-12-10 22:58:27.313834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.955 qpair failed and we were unable to recover it. 
00:27:19.955 [2024-12-10 22:58:27.313917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.955 [2024-12-10 22:58:27.313945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.955 qpair failed and we were unable to recover it. 00:27:19.955 [2024-12-10 22:58:27.314135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.955 [2024-12-10 22:58:27.314171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.955 qpair failed and we were unable to recover it. 00:27:19.955 [2024-12-10 22:58:27.314292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.955 [2024-12-10 22:58:27.314329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.955 qpair failed and we were unable to recover it. 00:27:19.955 [2024-12-10 22:58:27.314486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.955 [2024-12-10 22:58:27.314511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.955 qpair failed and we were unable to recover it. 00:27:19.955 [2024-12-10 22:58:27.314635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.955 [2024-12-10 22:58:27.314660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.955 qpair failed and we were unable to recover it. 
00:27:19.955 [2024-12-10 22:58:27.314773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.955 [2024-12-10 22:58:27.314798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.955 qpair failed and we were unable to recover it. 00:27:19.955 [2024-12-10 22:58:27.314976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.955 [2024-12-10 22:58:27.315006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.955 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.315133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.315165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.315299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.315346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.315484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.315515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 
00:27:19.956 [2024-12-10 22:58:27.315659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.315686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.315776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.315803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.315914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.315940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.316050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.316077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.316223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.316261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 
00:27:19.956 [2024-12-10 22:58:27.316405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.316438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.316572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.316600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.316718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.316744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.316893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.316930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.317052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.317089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 
00:27:19.956 [2024-12-10 22:58:27.317219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.317267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.317424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.317461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.317620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.317647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.317739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.317765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.317850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.317876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 
00:27:19.956 [2024-12-10 22:58:27.317959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.317985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.318094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.318120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.318213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.318266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.318440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.318478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.318615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.318641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 
00:27:19.956 [2024-12-10 22:58:27.318725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.318751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.318877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.318915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.319090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.319116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.319203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.319249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.319400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.319443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 
00:27:19.956 [2024-12-10 22:58:27.319534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.319567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.319667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.319706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.319825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.319852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.319938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.319964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.320084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.320111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 
00:27:19.956 [2024-12-10 22:58:27.320193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.320219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.320310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.320337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.320423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.320449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.320531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.956 [2024-12-10 22:58:27.320568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-12-10 22:58:27.320724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.320750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-12-10 22:58:27.320832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.320858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.320936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.320962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.321060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.321087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.321185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.321212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.321297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.321323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-12-10 22:58:27.321409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.321435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.321559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.321586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.321688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.321714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.321830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.321856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.321961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.321992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-12-10 22:58:27.322080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.322106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.322256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.322295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.322452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.322490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.322629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.322656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.322744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.322771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-12-10 22:58:27.322894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.322930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.323047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.323084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.323236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.323274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.323438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.323474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.323626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.323654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-12-10 22:58:27.323744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.323770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.323920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.323956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.324080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.324116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.324262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.324311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.324466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.324502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-12-10 22:58:27.324666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.324694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.324789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.324815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.324986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.325023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.325169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.325206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.325370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.325406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-12-10 22:58:27.325561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.325609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.325728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.325754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.325877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.325923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.326074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.326117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.326246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.326285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-12-10 22:58:27.326467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.326503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-12-10 22:58:27.326645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.957 [2024-12-10 22:58:27.326672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.326783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.326809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.326950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.326987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.327132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.327183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 
00:27:19.958 [2024-12-10 22:58:27.327321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.327348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.327486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.327522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.327649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.327675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.327790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.327816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.327925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.327951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 
00:27:19.958 [2024-12-10 22:58:27.328079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.328115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.328301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.328337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.328454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.328481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.328578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.328605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.328687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.328718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 
00:27:19.958 [2024-12-10 22:58:27.328836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.328887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.329037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.329073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.329210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.329246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.329397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.329432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.329603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.329631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 
00:27:19.958 [2024-12-10 22:58:27.329749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.329775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.329933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.329982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.330153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.330190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.330377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.330415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.330557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.330593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 
00:27:19.958 [2024-12-10 22:58:27.330717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.330743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.330863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.330900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.331038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.331077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.331255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.331292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.331427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.331464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 
00:27:19.958 [2024-12-10 22:58:27.331624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.331650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.331764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.331791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.331882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.331908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.331995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.332039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 00:27:19.958 [2024-12-10 22:58:27.332190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.332229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 
00:27:19.958 [2024-12-10 22:58:27.332388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.958 [2024-12-10 22:58:27.332430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.958 qpair failed and we were unable to recover it. 
[the three messages above (connect() failed with errno = 111, the resulting sock connection error, and the unrecoverable qpair failure) repeat verbatim ~30 more times for tqpair=0x7f08d8000b90 between 22:58:27.332 and 22:58:27.337]
00:27:19.959 [2024-12-10 22:58:27.337679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.959 [2024-12-10 22:58:27.337718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.959 qpair failed and we were unable to recover it. 
[the same triplet then repeats verbatim ~80 more times for tqpair=0x7f08e4000b90 through 22:58:27.352, ending with:]
00:27:19.962 [2024-12-10 22:58:27.352900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.352935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 
00:27:19.962 [2024-12-10 22:58:27.353064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.353102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.353246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.353285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.353451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.353488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.353612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.353650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.353762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.353805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 
00:27:19.962 [2024-12-10 22:58:27.353942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.353980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.354089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.354124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.354287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.354323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.354450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.354496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.354633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.354673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 
00:27:19.962 [2024-12-10 22:58:27.354809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.354845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.355031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.355068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.355197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.355235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.355422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.355459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.355561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.355600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 
00:27:19.962 [2024-12-10 22:58:27.355729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.355766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.355891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.355928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.356059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.356098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.356253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.356291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-12-10 22:58:27.356524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.962 [2024-12-10 22:58:27.356596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.962 qpair failed and we were unable to recover it. 
00:27:19.963 [2024-12-10 22:58:27.356758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.356795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.356913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.356951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.357134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.357172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.357329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.357366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.357514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.357570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 
00:27:19.963 [2024-12-10 22:58:27.357739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.357776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.357964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.358001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.358161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.358198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.358325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.358362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.358501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.358539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 
00:27:19.963 [2024-12-10 22:58:27.358704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.358741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.358871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.358908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.359030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.359067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.359216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.359253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.359411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.359448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 
00:27:19.963 [2024-12-10 22:58:27.359604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.359642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.359787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.359823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.359973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.360010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.360173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.360209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.360383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.360440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 
00:27:19.963 [2024-12-10 22:58:27.360593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.360631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.360819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.360886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.361067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.361108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.361329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.361398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.361574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.361610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 
00:27:19.963 [2024-12-10 22:58:27.361755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.361797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.361956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.361994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.362117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.362152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.362292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.362331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.362501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.362541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 
00:27:19.963 [2024-12-10 22:58:27.362741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.362781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.362960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.362999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.363160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.363208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.363332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.963 [2024-12-10 22:58:27.363372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.963 qpair failed and we were unable to recover it. 00:27:19.963 [2024-12-10 22:58:27.363532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.363579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 
00:27:19.964 [2024-12-10 22:58:27.363754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.363795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.363951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.363990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.364197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.364236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.364394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.364444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.364602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.364641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 
00:27:19.964 [2024-12-10 22:58:27.364770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.364808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.365009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.365047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.365212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.365250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.365361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.365401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.365568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.365614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 
00:27:19.964 [2024-12-10 22:58:27.365760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.365803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.365935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.365974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.366172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.366210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.366336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.366376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.366519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.366568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 
00:27:19.964 [2024-12-10 22:58:27.366770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.366810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.367009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.367047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.367205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.367246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.367370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.367408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 00:27:19.964 [2024-12-10 22:58:27.367553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.367593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 
00:27:19.964 [2024-12-10 22:58:27.367715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.964 [2024-12-10 22:58:27.367754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.964 qpair failed and we were unable to recover it. 
[identical entries (connect() failed, errno = 111; sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated through 2024-12-10 22:58:27.392356 omitted]
00:27:19.968 [2024-12-10 22:58:27.392499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.392540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.392727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.392767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.392934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.392976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.393106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.393147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.393312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.393352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 
00:27:19.968 [2024-12-10 22:58:27.393484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.393525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.393704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.393744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.393919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.393967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.394144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.394184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.394371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.394445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 
00:27:19.968 [2024-12-10 22:58:27.394659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.394701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.394913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.394954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.395123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.395164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.395302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.395345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.395496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.395540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 
00:27:19.968 [2024-12-10 22:58:27.395720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.395761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.395892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.395937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.396180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.396228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.396429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.396478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.396725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.396776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 
00:27:19.968 [2024-12-10 22:58:27.396978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.397027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.397301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.397366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.397562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.397613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.397764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.397817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.397997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.398036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 
00:27:19.968 [2024-12-10 22:58:27.398202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.398244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.398407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.398448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.398651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.398704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.398921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.398964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.399130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.399179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 
00:27:19.968 [2024-12-10 22:58:27.399329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.399371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.399555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.399598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.399766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.399809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.968 qpair failed and we were unable to recover it. 00:27:19.968 [2024-12-10 22:58:27.399992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.968 [2024-12-10 22:58:27.400037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.400175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.400217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 
00:27:19.969 [2024-12-10 22:58:27.400412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.400457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.400598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.400642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.400788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.400830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.401015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.401059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.401211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.401254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 
00:27:19.969 [2024-12-10 22:58:27.401415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.401458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.401635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.401679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.401851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.401902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.402086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.402128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.402271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.402333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 
00:27:19.969 [2024-12-10 22:58:27.402565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.402611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.402766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.402830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.403052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.403098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.403291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.403336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.403526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.403602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 
00:27:19.969 [2024-12-10 22:58:27.403748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.403790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.403973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.404018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.404187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.404229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.404463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.404508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.404707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.404750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 
00:27:19.969 [2024-12-10 22:58:27.404929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.404972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.405151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.405204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.405357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.405401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.405576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.405619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.405831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.405875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 
00:27:19.969 [2024-12-10 22:58:27.406050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.406094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.406252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.406306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.406509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.406610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.406828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.406873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.407015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.407059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 
00:27:19.969 [2024-12-10 22:58:27.407232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.407275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.407444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.407488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.969 [2024-12-10 22:58:27.407684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.969 [2024-12-10 22:58:27.407729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.969 qpair failed and we were unable to recover it. 00:27:19.970 [2024-12-10 22:58:27.407906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.970 [2024-12-10 22:58:27.407950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.970 qpair failed and we were unable to recover it. 00:27:19.970 [2024-12-10 22:58:27.408120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.970 [2024-12-10 22:58:27.408163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.970 qpair failed and we were unable to recover it. 
00:27:19.970 [2024-12-10 22:58:27.408365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.970 [2024-12-10 22:58:27.408418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.970 qpair failed and we were unable to recover it. 00:27:19.970 [2024-12-10 22:58:27.408665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.970 [2024-12-10 22:58:27.408710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.970 qpair failed and we were unable to recover it. 00:27:19.970 [2024-12-10 22:58:27.408881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.970 [2024-12-10 22:58:27.408926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.970 qpair failed and we were unable to recover it. 00:27:19.970 [2024-12-10 22:58:27.409092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.970 [2024-12-10 22:58:27.409136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.970 qpair failed and we were unable to recover it. 00:27:19.970 [2024-12-10 22:58:27.409308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.970 [2024-12-10 22:58:27.409352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:19.970 qpair failed and we were unable to recover it. 
00:27:19.970 [2024-12-10 22:58:27.409514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.970 [2024-12-10 22:58:27.409567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:19.970 qpair failed and we were unable to recover it.
[log collapsed: the three-line sequence above — connect() failed with errno = 111 (ECONNREFUSED) in posix_sock_create, the resulting sock connection error in nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." — repeats continuously against addr=10.0.0.2, port=4420: from [2024-12-10 22:58:27.409780] through [2024-12-10 22:58:27.420486] for tqpair=0x7f08e4000b90, and from [2024-12-10 22:58:27.420733] through [2024-12-10 22:58:27.437530] for tqpair=0x7f08d8000b90; console timestamps 00:27:19.970–00:27:19.973]
00:27:19.973 [2024-12-10 22:58:27.437781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.437831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 00:27:19.973 [2024-12-10 22:58:27.438068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.438117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 00:27:19.973 [2024-12-10 22:58:27.438351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.438401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 00:27:19.973 [2024-12-10 22:58:27.438560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.438611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 00:27:19.973 [2024-12-10 22:58:27.438811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.438863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 
00:27:19.973 [2024-12-10 22:58:27.439113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.439176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 00:27:19.973 [2024-12-10 22:58:27.439344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.439393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 00:27:19.973 [2024-12-10 22:58:27.439623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.439673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 00:27:19.973 [2024-12-10 22:58:27.439852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.439902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 00:27:19.973 [2024-12-10 22:58:27.440095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.440146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 
00:27:19.973 [2024-12-10 22:58:27.440376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.440432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 00:27:19.973 [2024-12-10 22:58:27.440653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.973 [2024-12-10 22:58:27.440705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.973 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.440938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.441001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.441190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.441239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.441490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.441560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 
00:27:19.974 [2024-12-10 22:58:27.441783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.441836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.442044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.442094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.442256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.442306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.442479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.442530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.442723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.442780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 
00:27:19.974 [2024-12-10 22:58:27.442993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.443044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.443253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.443302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.443565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.443638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.443853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.443904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.444082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.444131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 
00:27:19.974 [2024-12-10 22:58:27.444361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.444411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.444580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.444633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.444805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.444855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.445048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.445096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.445332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.445392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 
00:27:19.974 [2024-12-10 22:58:27.445579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.445630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.445866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.445923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.446116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.446164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.446380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.446432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.446641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.446691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 
00:27:19.974 [2024-12-10 22:58:27.446890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.446940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.447095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.447144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.447362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.447414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.447618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.447670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.447824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.447874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 
00:27:19.974 [2024-12-10 22:58:27.448111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.448163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.448371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.448420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.448644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.448694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.448864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.448913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.449088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.449140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 
00:27:19.974 [2024-12-10 22:58:27.449351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.449400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.974 [2024-12-10 22:58:27.449563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.974 [2024-12-10 22:58:27.449620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.974 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.449833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.449887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.450109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.450160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.450398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.450448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 
00:27:19.975 [2024-12-10 22:58:27.450650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.450703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.450857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.450907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.451057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.451108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.451284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.451334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.451466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.451514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 
00:27:19.975 [2024-12-10 22:58:27.451759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.451820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.452003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.452052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.452254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.452303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.452523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.452590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.452809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.452860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 
00:27:19.975 [2024-12-10 22:58:27.453018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.453067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.453272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.453321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.453495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.453589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.453782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.453833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.454000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.454051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 
00:27:19.975 [2024-12-10 22:58:27.454260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.454309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.454491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.454595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.454758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.454807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.455000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.455050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.455248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.455298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 
00:27:19.975 [2024-12-10 22:58:27.455488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.455565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.455727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.455784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.455952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.456002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.456236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.456285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.456470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.456520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 
00:27:19.975 [2024-12-10 22:58:27.456767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.456816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.457042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.457090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.975 qpair failed and we were unable to recover it. 00:27:19.975 [2024-12-10 22:58:27.457298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.975 [2024-12-10 22:58:27.457349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.976 qpair failed and we were unable to recover it. 00:27:19.976 [2024-12-10 22:58:27.457497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.976 [2024-12-10 22:58:27.457560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.976 qpair failed and we were unable to recover it. 00:27:19.976 [2024-12-10 22:58:27.457736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.976 [2024-12-10 22:58:27.457787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.976 qpair failed and we were unable to recover it. 
00:27:19.977 [... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." message group repeats, verbatim except for microsecond timestamps, through 2024-12-10 22:58:27.487170 ...]
00:27:19.979 [2024-12-10 22:58:27.487388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.487448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.487651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.487729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.487953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.488006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.488228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.488281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.488491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.488572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 
00:27:19.979 [2024-12-10 22:58:27.488804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.488858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.489103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.489155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.489398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.489450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.489722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.489786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.489977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.490055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 
00:27:19.979 [2024-12-10 22:58:27.490316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.490376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.490607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.490663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.490887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.490942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.491174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.491226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.491430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.491481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 
00:27:19.979 [2024-12-10 22:58:27.491719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.491775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.492014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.492066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.492305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.492357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.492594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.492661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.492917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.492969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 
00:27:19.979 [2024-12-10 22:58:27.493212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.493263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.493477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.493532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.493719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.493772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.494010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.494067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.494300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.494357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 
00:27:19.979 [2024-12-10 22:58:27.494608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.494668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.494876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.494932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.979 [2024-12-10 22:58:27.495166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.979 [2024-12-10 22:58:27.495222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.979 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.495407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.495473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.495727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.495786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 
00:27:19.980 [2024-12-10 22:58:27.496037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.496092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.496324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.496390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.496627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.496685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.496903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.496959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.497188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.497244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 
00:27:19.980 [2024-12-10 22:58:27.497436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.497495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.497709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.497768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.498022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.498077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.498335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.498394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.498687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.498748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 
00:27:19.980 [2024-12-10 22:58:27.498961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.499016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.499205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.499269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.499457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.499515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.499800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.499858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.500079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.500137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 
00:27:19.980 [2024-12-10 22:58:27.500400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.500469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.500683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.500742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.501003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.501060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.501280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.501335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.501578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.501638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 
00:27:19.980 [2024-12-10 22:58:27.501866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.501924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.502180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.502236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.502452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.502518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.502743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.502800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.503065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.503120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 
00:27:19.980 [2024-12-10 22:58:27.503398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.503466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.503712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.503772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.504033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.504089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.980 qpair failed and we were unable to recover it. 00:27:19.980 [2024-12-10 22:58:27.504323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.980 [2024-12-10 22:58:27.504387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.504605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.504665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 
00:27:19.981 [2024-12-10 22:58:27.504925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.504981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.505171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.505228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.505482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.505542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.505824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.505883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.506151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.506208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 
00:27:19.981 [2024-12-10 22:58:27.506508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.506594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.506855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.506914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.507109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.507165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.507407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.507463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.507699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.507769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 
00:27:19.981 [2024-12-10 22:58:27.508002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.508059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.508275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.508332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.508592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.508656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.508947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.509005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.509221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.509277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 
00:27:19.981 [2024-12-10 22:58:27.509498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.509570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.509781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.509839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.510061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.510116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.510382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.510442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 00:27:19.981 [2024-12-10 22:58:27.510679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.981 [2024-12-10 22:58:27.510747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.981 qpair failed and we were unable to recover it. 
00:27:19.984 [2024-12-10 22:58:27.545115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.984 [2024-12-10 22:58:27.545191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.984 qpair failed and we were unable to recover it. 00:27:19.984 [2024-12-10 22:58:27.545446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.984 [2024-12-10 22:58:27.545508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.984 qpair failed and we were unable to recover it. 00:27:19.984 [2024-12-10 22:58:27.545773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.984 [2024-12-10 22:58:27.545834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.984 qpair failed and we were unable to recover it. 00:27:19.984 [2024-12-10 22:58:27.546106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.984 [2024-12-10 22:58:27.546175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.984 qpair failed and we were unable to recover it. 00:27:19.984 [2024-12-10 22:58:27.546408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.984 [2024-12-10 22:58:27.546472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.984 qpair failed and we were unable to recover it. 
00:27:19.984 [2024-12-10 22:58:27.546792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.984 [2024-12-10 22:58:27.546873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.984 qpair failed and we were unable to recover it. 00:27:19.984 [2024-12-10 22:58:27.547168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.984 [2024-12-10 22:58:27.547248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.984 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.547529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.547612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.547851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.547911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.548129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.548190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 
00:27:19.985 [2024-12-10 22:58:27.548448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.548521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.548793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.548875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.549187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.549264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.549539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.549619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.549895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.549977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 
00:27:19.985 [2024-12-10 22:58:27.550288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.550366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.550660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.550740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.551034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.551116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.551351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.551411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.551644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.551724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 
00:27:19.985 [2024-12-10 22:58:27.552022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.552101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.552380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.552440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.552678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.552758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.553060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.553138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.553340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.553404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 
00:27:19.985 [2024-12-10 22:58:27.553726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.553807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.554047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.554126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.554339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.554399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.554648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.554731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.555019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.555099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 
00:27:19.985 [2024-12-10 22:58:27.555290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.555350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.555607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.555689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.555978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.556059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.556321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.556384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.556659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.556722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 
00:27:19.985 [2024-12-10 22:58:27.556915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.556988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.557276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.985 [2024-12-10 22:58:27.557336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.985 qpair failed and we were unable to recover it. 00:27:19.985 [2024-12-10 22:58:27.557603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.557666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.557874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.557948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.558242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.558305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 
00:27:19.986 [2024-12-10 22:58:27.558510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.558591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.558843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.558905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.559106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.559176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.559428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.559488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.559735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.559799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 
00:27:19.986 [2024-12-10 22:58:27.560043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.560105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.560375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.560439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.560687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.560751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.561054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.561132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.561379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.561442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 
00:27:19.986 [2024-12-10 22:58:27.561721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.561802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.562099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.562176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.562436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.562512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.562780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.562864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.563080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.563158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 
00:27:19.986 [2024-12-10 22:58:27.563438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.563498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.563817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.563898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.564186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.564265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.564539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.564629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.564924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.565006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 
00:27:19.986 [2024-12-10 22:58:27.565295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.565376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.565680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.565759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.566018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.566097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.566392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.566455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.566749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.566814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 
00:27:19.986 [2024-12-10 22:58:27.567107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.567187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.567465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.567527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.567830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.567894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.568198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.568276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.568564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.568628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 
00:27:19.986 [2024-12-10 22:58:27.568896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.568974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.986 [2024-12-10 22:58:27.569219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.986 [2024-12-10 22:58:27.569298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.986 qpair failed and we were unable to recover it. 00:27:19.987 [2024-12-10 22:58:27.569584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.987 [2024-12-10 22:58:27.569648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.987 qpair failed and we were unable to recover it. 00:27:19.987 [2024-12-10 22:58:27.569960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.987 [2024-12-10 22:58:27.570041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.987 qpair failed and we were unable to recover it. 00:27:19.987 [2024-12-10 22:58:27.570300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.987 [2024-12-10 22:58:27.570384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.987 qpair failed and we were unable to recover it. 
00:27:19.987 [2024-12-10 22:58:27.570578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.987 [2024-12-10 22:58:27.570640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.987 qpair failed and we were unable to recover it. 00:27:19.987 [2024-12-10 22:58:27.570911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.987 [2024-12-10 22:58:27.570994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.987 qpair failed and we were unable to recover it. 00:27:19.987 [2024-12-10 22:58:27.571305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.987 [2024-12-10 22:58:27.571385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.987 qpair failed and we were unable to recover it. 00:27:19.987 [2024-12-10 22:58:27.571641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.987 [2024-12-10 22:58:27.571733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.987 qpair failed and we were unable to recover it. 00:27:19.987 [2024-12-10 22:58:27.572025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.987 [2024-12-10 22:58:27.572118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.987 qpair failed and we were unable to recover it. 
00:27:19.990 [2024-12-10 22:58:27.609624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.609708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.609955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.610016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.610200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.610259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.610536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.610617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.610841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.610904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 
00:27:19.990 [2024-12-10 22:58:27.611108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.611169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.611401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.611462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.611728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.611797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.612067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.612139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.612420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.612480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 
00:27:19.990 [2024-12-10 22:58:27.612752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.612839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.613150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.613213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.613412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.613473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.613723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.613785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.614060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.614141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 
00:27:19.990 [2024-12-10 22:58:27.614390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.614452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.990 qpair failed and we were unable to recover it. 00:27:19.990 [2024-12-10 22:58:27.614744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.990 [2024-12-10 22:58:27.614805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.615064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.615157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.615413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.615476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.615801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.615882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 
00:27:19.991 [2024-12-10 22:58:27.616153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.616214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.616469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.616533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.616849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.616910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.617220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.617299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.617533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.617634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 
00:27:19.991 [2024-12-10 22:58:27.617873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.617933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.618171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.618233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.618422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.618483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.618825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.618907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.619205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.619283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 
00:27:19.991 [2024-12-10 22:58:27.619574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.619638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.619929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.620009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.620239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.620321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.620598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.620660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.620881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.620960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 
00:27:19.991 [2024-12-10 22:58:27.621301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.621383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.621641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.621723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.621974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.622052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.622243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.622317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.622646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.622728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 
00:27:19.991 [2024-12-10 22:58:27.622958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.623018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.623223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.623282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.623567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.623632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.623913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.623991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.624248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.624308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 
00:27:19.991 [2024-12-10 22:58:27.624563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.624637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.624876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.624956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.625186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.625247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.625448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.625521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.625803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.625868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 
00:27:19.991 [2024-12-10 22:58:27.626101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.991 [2024-12-10 22:58:27.626163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.991 qpair failed and we were unable to recover it. 00:27:19.991 [2024-12-10 22:58:27.626376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.626437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.626711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.626785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.627052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.627114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.627352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.627413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 
00:27:19.992 [2024-12-10 22:58:27.627700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.627764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.628020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.628104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.628383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.628443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.628759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.628840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.629113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.629194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 
00:27:19.992 [2024-12-10 22:58:27.629483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.629579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.629852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.629932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.630243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.630321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.630582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.630647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.630847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.630923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 
00:27:19.992 [2024-12-10 22:58:27.631230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.631308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.631582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.631652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.631983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.632064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.632356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.632435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.632706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.632786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 
00:27:19.992 [2024-12-10 22:58:27.633045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.633126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.633373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.633434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.633695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.633776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.634041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.634118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 00:27:19.992 [2024-12-10 22:58:27.634335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.992 [2024-12-10 22:58:27.634399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:19.992 qpair failed and we were unable to recover it. 
00:27:19.992 [2024-12-10 22:58:27.634686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.992 [2024-12-10 22:58:27.634767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:19.992 qpair failed and we were unable to recover it.
00:27:20.272 [2024-12-10 22:58:27.674397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.674458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.674728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.674791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.675065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.675135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.675380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.675442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.675667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.675748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 
00:27:20.272 [2024-12-10 22:58:27.676015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.676096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.676308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.676383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.676647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.676728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.676912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.676973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.677194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.677256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 
00:27:20.272 [2024-12-10 22:58:27.677457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.677519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.677819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.677880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.678109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.678176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.678449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.272 [2024-12-10 22:58:27.678512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.272 qpair failed and we were unable to recover it. 00:27:20.272 [2024-12-10 22:58:27.678794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.678855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 
00:27:20.273 [2024-12-10 22:58:27.679109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.679186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.679464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.679527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.679814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.679896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.680137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.680225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.680496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.680587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 
00:27:20.273 [2024-12-10 22:58:27.680914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.680992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.681263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.681343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.681569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.681633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.681945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.682027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.682328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.682407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 
00:27:20.273 [2024-12-10 22:58:27.682716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.682797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.683069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.683150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.683344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.683406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.683667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.683748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.683953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.684033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 
00:27:20.273 [2024-12-10 22:58:27.684314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.684377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.684584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.684649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.684874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.684951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.685240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.685301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.685572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.685636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 
00:27:20.273 [2024-12-10 22:58:27.685852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.685931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.686203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.686280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.686477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.686538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.686830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.686911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.687128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.687207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 
00:27:20.273 [2024-12-10 22:58:27.687486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.687574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.687899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.687991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.688278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.688358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.688627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.688709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.688965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.689043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 
00:27:20.273 [2024-12-10 22:58:27.689341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.689405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.689691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.689754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.689997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.690075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.273 [2024-12-10 22:58:27.690308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.273 [2024-12-10 22:58:27.690382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.273 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.690679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.690761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 
00:27:20.274 [2024-12-10 22:58:27.691015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.691106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.691343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.691404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.691663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.691744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.692010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.692089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.692265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.692327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 
00:27:20.274 [2024-12-10 22:58:27.692593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.692674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.692997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.693078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.693288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.693351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.693585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.693649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.693927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.694008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 
00:27:20.274 [2024-12-10 22:58:27.694282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.694343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.694654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.694735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.694996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.695084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.695340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.695402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.695646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.695708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 
00:27:20.274 [2024-12-10 22:58:27.695944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.696006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.696247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.696310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.696562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.696627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.696906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.696966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.697291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.697382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 
00:27:20.274 [2024-12-10 22:58:27.697655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.697736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.697984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.698063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.698345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.698418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.698757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.698838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.699063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.699141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 
00:27:20.274 [2024-12-10 22:58:27.699366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.699425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.699697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.699778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.700003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.700083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.700308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.700368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 00:27:20.274 [2024-12-10 22:58:27.700591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.274 [2024-12-10 22:58:27.700655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.274 qpair failed and we were unable to recover it. 
00:27:20.274 [... the same three-line failure sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 22:58:27.700 through 22:58:27.738 ...]
00:27:20.278 [2024-12-10 22:58:27.738243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.738305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.738561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.738626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.738893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.738960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.739207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.739267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.739510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.739603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 
00:27:20.278 [2024-12-10 22:58:27.739851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.739911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.740194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.740257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.740520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.740609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.740948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.741026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.741326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.741407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 
00:27:20.278 [2024-12-10 22:58:27.741679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.741760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.742073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.742152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.742400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.742473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.742753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.742832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.743036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.743115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 
00:27:20.278 [2024-12-10 22:58:27.743346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.743417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.743674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.743756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.743998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.744058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.744240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.744300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.744506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.744588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 
00:27:20.278 [2024-12-10 22:58:27.744856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.744918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.745151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.745212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.745451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.745513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.745787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.745877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.746178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.746239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 
00:27:20.278 [2024-12-10 22:58:27.746483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.746542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.746807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.278 [2024-12-10 22:58:27.746885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.278 qpair failed and we were unable to recover it. 00:27:20.278 [2024-12-10 22:58:27.747174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.747254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.747495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.747585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.747872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.747934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 
00:27:20.279 [2024-12-10 22:58:27.748205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.748289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.748537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.748619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.748925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.749004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.749276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.749349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.749596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.749660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 
00:27:20.279 [2024-12-10 22:58:27.749942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.750003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.750238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.750298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.750524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.750609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.750891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.750968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.751271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.751335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 
00:27:20.279 [2024-12-10 22:58:27.751598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.751665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.751957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.752035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.752295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.752357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.752625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.752706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.752984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.753066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 
00:27:20.279 [2024-12-10 22:58:27.753312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.753373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.753667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.753747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.754030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.754113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.754349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.754408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.754626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.754707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 
00:27:20.279 [2024-12-10 22:58:27.754975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.755051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.755304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.755366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.755572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.755635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.755908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.755986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.756251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.756313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 
00:27:20.279 [2024-12-10 22:58:27.756640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.756731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.757007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.757068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.757344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.757414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.757814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.757897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.758181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.758260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 
00:27:20.279 [2024-12-10 22:58:27.758538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.758620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.758913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.758992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.759300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.279 [2024-12-10 22:58:27.759379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.279 qpair failed and we were unable to recover it. 00:27:20.279 [2024-12-10 22:58:27.759651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.759737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 00:27:20.280 [2024-12-10 22:58:27.760019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.760099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 
00:27:20.280 [2024-12-10 22:58:27.760348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.760408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 00:27:20.280 [2024-12-10 22:58:27.760682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.760744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 00:27:20.280 [2024-12-10 22:58:27.761000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.761081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 00:27:20.280 [2024-12-10 22:58:27.761327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.761388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 00:27:20.280 [2024-12-10 22:58:27.761627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.761709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 
00:27:20.280 [2024-12-10 22:58:27.761969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.762046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 00:27:20.280 [2024-12-10 22:58:27.762346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.762409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 00:27:20.280 [2024-12-10 22:58:27.762687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.762769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 00:27:20.280 [2024-12-10 22:58:27.763074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.763152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 00:27:20.280 [2024-12-10 22:58:27.763381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.763454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 
00:27:20.280 [2024-12-10 22:58:27.763736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.280 [2024-12-10 22:58:27.763817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.280 qpair failed and we were unable to recover it. 
00:27:20.283 [2024-12-10 22:58:27.803331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.283 [2024-12-10 22:58:27.803399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.283 qpair failed and we were unable to recover it. 00:27:20.283 [2024-12-10 22:58:27.803730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.283 [2024-12-10 22:58:27.803811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.283 qpair failed and we were unable to recover it. 00:27:20.283 [2024-12-10 22:58:27.804027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.283 [2024-12-10 22:58:27.804110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.283 qpair failed and we were unable to recover it. 00:27:20.283 [2024-12-10 22:58:27.804341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.283 [2024-12-10 22:58:27.804402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.804661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.804755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 
00:27:20.284 [2024-12-10 22:58:27.805071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.805149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.805419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.805480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.805743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.805829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.806126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.806205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.806448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.806509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 
00:27:20.284 [2024-12-10 22:58:27.806794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.806874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.807170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.807263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.807540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.807626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.807904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.807985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.808253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.808337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 
00:27:20.284 [2024-12-10 22:58:27.808638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.808720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.809020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.809097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.809380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.809454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.809776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.809859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.810169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.810248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 
00:27:20.284 [2024-12-10 22:58:27.810476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.810537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.810877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.810939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.811198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.811277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.811533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.811611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.811834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.811914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 
00:27:20.284 [2024-12-10 22:58:27.812207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.812297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.812505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.812584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.812844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.812922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.813204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.813285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.813579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.813645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 
00:27:20.284 [2024-12-10 22:58:27.813940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.814001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.814221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.814302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.814578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.814644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.814920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.814997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.815280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.815341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 
00:27:20.284 [2024-12-10 22:58:27.815530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.815608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.815901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.815981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.816229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.816290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.284 qpair failed and we were unable to recover it. 00:27:20.284 [2024-12-10 22:58:27.816463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.284 [2024-12-10 22:58:27.816525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.816883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.816968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 
00:27:20.285 [2024-12-10 22:58:27.817220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.817297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.817494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.817576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.817834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.817914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.818181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.818263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.818535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.818614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 
00:27:20.285 [2024-12-10 22:58:27.818939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.819020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.819287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.819369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.819648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.819730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.820044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.820124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.820368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.820429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 
00:27:20.285 [2024-12-10 22:58:27.820743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.820807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.821083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.821144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.821391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.821453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.821759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.821840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.822127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.822205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 
00:27:20.285 [2024-12-10 22:58:27.822481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.822541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.822822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.822912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.823191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.823269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.823505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.823580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.823845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.823922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 
00:27:20.285 [2024-12-10 22:58:27.824245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.824325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.824596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.824660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.824967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.825046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.825271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.825350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.825583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.825645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 
00:27:20.285 [2024-12-10 22:58:27.825976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.826066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.826319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.826394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.826617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.826681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.826955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.827032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.827273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.827334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 
00:27:20.285 [2024-12-10 22:58:27.827620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.827699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.827936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.828015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.828282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.828343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.285 [2024-12-10 22:58:27.828653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.285 [2024-12-10 22:58:27.828746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.285 qpair failed and we were unable to recover it. 00:27:20.286 [2024-12-10 22:58:27.829014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.286 [2024-12-10 22:58:27.829076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.286 qpair failed and we were unable to recover it. 
00:27:20.286 [2024-12-10 22:58:27.829272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.286 [2024-12-10 22:58:27.829332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.286 qpair failed and we were unable to recover it. 00:27:20.286 [2024-12-10 22:58:27.829602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.286 [2024-12-10 22:58:27.829665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.286 qpair failed and we were unable to recover it. 00:27:20.286 [2024-12-10 22:58:27.829953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.286 [2024-12-10 22:58:27.830034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.286 qpair failed and we were unable to recover it. 00:27:20.286 [2024-12-10 22:58:27.830309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.286 [2024-12-10 22:58:27.830369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.286 qpair failed and we were unable to recover it. 00:27:20.286 [2024-12-10 22:58:27.830587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.286 [2024-12-10 22:58:27.830650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.286 qpair failed and we were unable to recover it. 
00:27:20.289 [2024-12-10 22:58:27.868988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.869049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.869301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.869378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.869670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.869760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.870019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.870101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.870289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.870350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 
00:27:20.289 [2024-12-10 22:58:27.870587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.870650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.870872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.870935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.871179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.871239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.871493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.871570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.871861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.871927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 
00:27:20.289 [2024-12-10 22:58:27.872221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.872301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.872591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.872655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.872952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.873031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.873255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.873318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.873497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.873577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 
00:27:20.289 [2024-12-10 22:58:27.873888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.873966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.874267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.874344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.874603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.874667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.874932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.875013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 00:27:20.289 [2024-12-10 22:58:27.875235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.289 [2024-12-10 22:58:27.875298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.289 qpair failed and we were unable to recover it. 
00:27:20.289 [2024-12-10 22:58:27.875533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.875624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.875964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.876043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.877791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.877841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.878008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.878059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.878205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.878252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 
00:27:20.290 [2024-12-10 22:58:27.878351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.878380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.878475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.878503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.878678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.878729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.878908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.878943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.879083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.879133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 
00:27:20.290 [2024-12-10 22:58:27.879283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.879311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.879413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.879441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.879562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.879591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.879732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.879779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.879900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.879957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 
00:27:20.290 [2024-12-10 22:58:27.880108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.880138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.880253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.880282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.880404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.880432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.880530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.880570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.880701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.880731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 
00:27:20.290 [2024-12-10 22:58:27.880839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.880868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.880988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.881015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.881101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.881129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.881256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.881291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.881396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.881423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 
00:27:20.290 [2024-12-10 22:58:27.881509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.881537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.881680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.881708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.881858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.881887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.881986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.882014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.882148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.882176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 
00:27:20.290 [2024-12-10 22:58:27.882269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.882300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.882409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.882438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.882559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.882589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.882720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.882748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 00:27:20.290 [2024-12-10 22:58:27.882900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.882930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.290 qpair failed and we were unable to recover it. 
00:27:20.290 [2024-12-10 22:58:27.883085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.290 [2024-12-10 22:58:27.883114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.883205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.883233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.883367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.883395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.883507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.883535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.883646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.883675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 
00:27:20.291 [2024-12-10 22:58:27.883776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.883811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.883955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.883983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.884110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.884143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.884275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.884303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.884448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.884477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 
00:27:20.291 [2024-12-10 22:58:27.884580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.884609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.884696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.884723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.884828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.884856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.884954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.884981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.885082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.885110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 
00:27:20.291 [2024-12-10 22:58:27.885206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.885241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.885379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.885409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.885520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.885561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.885663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.885691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.885836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.885865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 
00:27:20.291 [2024-12-10 22:58:27.885967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.885995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.886096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.886125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.886247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.886276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.886411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.886440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-12-10 22:58:27.886559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-12-10 22:58:27.886590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 
00:27:20.294 [2024-12-10 22:58:27.902719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.294 [2024-12-10 22:58:27.902754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.902907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.902934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.903029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.903056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.903141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.903168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.903298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.903327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 
00:27:20.295 [2024-12-10 22:58:27.903434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.903462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.903569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.903597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.903691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.903718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.903850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.903878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.903999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.904026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 
00:27:20.295 [2024-12-10 22:58:27.904118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.904146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.904240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.904269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.904383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.904412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.904505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.904533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.904671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.904699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 
00:27:20.295 [2024-12-10 22:58:27.904800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.904830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.904951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.904979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.905092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.905119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.905248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.905277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.905404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.905431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 
00:27:20.295 [2024-12-10 22:58:27.905525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.905574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.905679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.905707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.905872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.905899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.906021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.906049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.906174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.906201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 
00:27:20.295 [2024-12-10 22:58:27.906354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.906382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.906502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.906530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.906641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.906669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.906783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.906813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.906928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.906955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 
00:27:20.295 [2024-12-10 22:58:27.907082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.907109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.907195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.907222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.907335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.907369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.907494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.907522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 00:27:20.295 [2024-12-10 22:58:27.907652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.295 [2024-12-10 22:58:27.907679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.295 qpair failed and we were unable to recover it. 
00:27:20.295 [2024-12-10 22:58:27.907814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.907842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.907990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.908017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.908130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.908157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.908286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.908314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.908434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.908461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 
00:27:20.296 [2024-12-10 22:58:27.908588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.908618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.908732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.908762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.908902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.908929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.909054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.909081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.909213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.909241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 
00:27:20.296 [2024-12-10 22:58:27.909346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.909374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.909500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.909527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.909648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.909675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.909813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.909843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.909956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.909984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 
00:27:20.296 [2024-12-10 22:58:27.910105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.910134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.910263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.910295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.910400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.910429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.910525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.910561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.910683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.910710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 
00:27:20.296 [2024-12-10 22:58:27.910832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.910860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.910957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.910985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.911079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.911106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.911202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.911230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.911362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.911390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 
00:27:20.296 [2024-12-10 22:58:27.911515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.911559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.911654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.911683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.911779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.911811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.911939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.911967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.912116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.912144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 
00:27:20.296 [2024-12-10 22:58:27.912263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.296 [2024-12-10 22:58:27.912293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.296 qpair failed and we were unable to recover it. 00:27:20.296 [2024-12-10 22:58:27.912425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.912453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.912559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.912588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.912719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.912772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.912861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.912890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 
00:27:20.297 [2024-12-10 22:58:27.912981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.913008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.913098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.913127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.913228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.913260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.913364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.913399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.913526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.913584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 
00:27:20.297 [2024-12-10 22:58:27.913696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.913724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.913858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.913894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.914043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.914071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.914176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.914204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.914303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.914331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 
00:27:20.297 [2024-12-10 22:58:27.914479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.914509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.914640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.914693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.914856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.914900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.915000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.915032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.915131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.915159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 
00:27:20.297 [2024-12-10 22:58:27.915249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.915277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.915416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.915445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.915584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.915612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.915711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.915739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.915889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.915916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 
00:27:20.297 [2024-12-10 22:58:27.916046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.916081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.916210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.916238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.916364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.916391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.916559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.916589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.916684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.916711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 
00:27:20.297 [2024-12-10 22:58:27.916805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.916832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.916974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.917010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.917102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.917129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.917254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.917283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.297 [2024-12-10 22:58:27.917391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.917433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 
00:27:20.297 [2024-12-10 22:58:27.917558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.297 [2024-12-10 22:58:27.917589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.297 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.917687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.917716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.917875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.917921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.918189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.918243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.918456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.918503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 
00:27:20.298 [2024-12-10 22:58:27.918689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.918717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.918812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.918844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.919036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.919081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.919284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.919330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.919513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.919568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 
00:27:20.298 [2024-12-10 22:58:27.919705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.919767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.920041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.920087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.920247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.920304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.920494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.920539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.920667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.920695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 
00:27:20.298 [2024-12-10 22:58:27.920862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.920908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.921110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.921156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.921366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.921427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.921641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.921670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.921821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.921849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 
00:27:20.298 [2024-12-10 22:58:27.921947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.921976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.922090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.922125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.922233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.922267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.922395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.922422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.922542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.922577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 
00:27:20.298 [2024-12-10 22:58:27.922674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.922703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.922876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.922935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.923134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.923189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.923390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.923436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.923626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.923654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 
00:27:20.298 [2024-12-10 22:58:27.923776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.923803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.923945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.923972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.924074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.924103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.924195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.924247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 00:27:20.298 [2024-12-10 22:58:27.924398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.298 [2024-12-10 22:58:27.924444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.298 qpair failed and we were unable to recover it. 
00:27:20.298 [2024-12-10 22:58:27.924574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.924602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.924738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.924766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.924878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.924911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.925035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.925070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.925283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.925317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 
00:27:20.299 [2024-12-10 22:58:27.925574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.925628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.925733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.925767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.925934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.925981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.926221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.926265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.926407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.926454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 
00:27:20.299 [2024-12-10 22:58:27.926633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.926663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.926758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.926787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.926935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.926987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.927193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.927239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.927433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.927480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 
00:27:20.299 [2024-12-10 22:58:27.927641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.927677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.927794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.927827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.928019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.928077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.928286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.928328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.928504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.928557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 
00:27:20.299 [2024-12-10 22:58:27.928717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.928750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.928902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.928948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.929162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.929207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.929413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.929463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.929674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.929709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 
00:27:20.299 [2024-12-10 22:58:27.929852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.929888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.930101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.930146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.930306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.930350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.930542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.930621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.930768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.930801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 
00:27:20.299 [2024-12-10 22:58:27.931011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.931058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.931284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.931330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.931523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.931564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.299 [2024-12-10 22:58:27.931668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.299 [2024-12-10 22:58:27.931705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.299 qpair failed and we were unable to recover it. 00:27:20.300 [2024-12-10 22:58:27.931824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.300 [2024-12-10 22:58:27.931859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.300 qpair failed and we were unable to recover it. 
00:27:20.300 [2024-12-10 22:58:27.932011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.300 [2024-12-10 22:58:27.932058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.300 qpair failed and we were unable to recover it. 00:27:20.300 [2024-12-10 22:58:27.932241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.300 [2024-12-10 22:58:27.932286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.300 qpair failed and we were unable to recover it. 00:27:20.300 [2024-12-10 22:58:27.932507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.300 [2024-12-10 22:58:27.932562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.300 qpair failed and we were unable to recover it. 00:27:20.300 [2024-12-10 22:58:27.932745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.300 [2024-12-10 22:58:27.932791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.300 qpair failed and we were unable to recover it. 00:27:20.300 [2024-12-10 22:58:27.932978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.300 [2024-12-10 22:58:27.933023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.300 qpair failed and we were unable to recover it. 
00:27:20.300 [2024-12-10 22:58:27.933170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.300 [2024-12-10 22:58:27.933223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.300 qpair failed and we were unable to recover it. 00:27:20.300 [2024-12-10 22:58:27.933384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.300 [2024-12-10 22:58:27.933429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.300 qpair failed and we were unable to recover it. 00:27:20.300 [2024-12-10 22:58:27.933618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.300 [2024-12-10 22:58:27.933658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.300 qpair failed and we were unable to recover it. 00:27:20.300 [2024-12-10 22:58:27.933778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.300 [2024-12-10 22:58:27.933813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.300 qpair failed and we were unable to recover it. 00:27:20.300 [2024-12-10 22:58:27.934050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.300 [2024-12-10 22:58:27.934118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.300 qpair failed and we were unable to recover it. 
00:27:20.300 [2024-12-10 22:58:27.934332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.934381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.934584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.934636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.934792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.934840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.935055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.935102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.935253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.935307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.935513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.935576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.935773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.935817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.935997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.936041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.936276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.936323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.936485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.936529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.936744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.936789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.936969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.937024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.937198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.937255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.937408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.937455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.937650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.937698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.937929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.937983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.938207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.938254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.938441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.938487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.938653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.938713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.938907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.938954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.939174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.939221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.939376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.939432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.939619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.300 [2024-12-10 22:58:27.939667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.300 qpair failed and we were unable to recover it.
00:27:20.300 [2024-12-10 22:58:27.939902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.939947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.940178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.940224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.940399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.940445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.940604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.940652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.940849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.940896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.941081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.941127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.941323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.941372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.941566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.941613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.941750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.941795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.941946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.941992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.942224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.942273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.942452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.942497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.942668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.942716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.942935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.942987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.943146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.943195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.943396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.943449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.943661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.943724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.943984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.944040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.944272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.944342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.944543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.944608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.944785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.944834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.945010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.945057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.945291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.945338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.945536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.945612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.945811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.945857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.946001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.946047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.946228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.946273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.946413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.946460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.946635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.946682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.946882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.946938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.301 qpair failed and we were unable to recover it.
00:27:20.301 [2024-12-10 22:58:27.947121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.301 [2024-12-10 22:58:27.947166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.947339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.947390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.947574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.947631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.947794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.947848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.948021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.948067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.948251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.948296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.948456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.948501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.948670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.948728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.948924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.948972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.949120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.949165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.949381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.949428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.949609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.949655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.949821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.949867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.950042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.950087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.950226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.950272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.950495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.950542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.950715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.950760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.950940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.950992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.951152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.951199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.951426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.951471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.951641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.951688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.951901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.951955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.952196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.952252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.952499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.952533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.952646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.952680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.952813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.952849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.953034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.953087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.953370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.953426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.953598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.953669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.953854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.953905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.954095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.954146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.954303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.954370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.954565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.954613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.954812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.954857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.955039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.302 [2024-12-10 22:58:27.955092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.302 qpair failed and we were unable to recover it.
00:27:20.302 [2024-12-10 22:58:27.955293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.303 [2024-12-10 22:58:27.955339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.303 qpair failed and we were unable to recover it.
00:27:20.303 [2024-12-10 22:58:27.955526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.303 [2024-12-10 22:58:27.955583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.303 qpair failed and we were unable to recover it.
00:27:20.303 [2024-12-10 22:58:27.955759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.303 [2024-12-10 22:58:27.955804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.303 qpair failed and we were unable to recover it.
00:27:20.303 [2024-12-10 22:58:27.955984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.303 [2024-12-10 22:58:27.956031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.303 qpair failed and we were unable to recover it.
00:27:20.303 [2024-12-10 22:58:27.956228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.303 [2024-12-10 22:58:27.956289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.303 qpair failed and we were unable to recover it.
00:27:20.303 [2024-12-10 22:58:27.956475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.303 [2024-12-10 22:58:27.956509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.303 qpair failed and we were unable to recover it.
00:27:20.303 [2024-12-10 22:58:27.956761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.303 [2024-12-10 22:58:27.956807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.303 qpair failed and we were unable to recover it.
00:27:20.303 [2024-12-10 22:58:27.956984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.957030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.957197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.957244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.957484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.957532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.957719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.957774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.957963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.958022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 
00:27:20.303 [2024-12-10 22:58:27.958278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.958328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.958490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.958538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.958781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.958834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.959059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.959111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.959350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.959403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 
00:27:20.303 [2024-12-10 22:58:27.959606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.959655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.959840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.959891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.960116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.960165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.960455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.960505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.960708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.960759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 
00:27:20.303 [2024-12-10 22:58:27.960951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.960999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.961160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.961209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.961406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.961455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.961652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.961703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.961909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.961957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 
00:27:20.303 [2024-12-10 22:58:27.962155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.962205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.962410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.962458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.962644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.962693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.962832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.962880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.963040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.963088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 
00:27:20.303 [2024-12-10 22:58:27.963316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.963376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.963538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.963596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.303 [2024-12-10 22:58:27.963762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.303 [2024-12-10 22:58:27.963812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.303 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.964011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.964059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.964288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.964337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 
00:27:20.304 [2024-12-10 22:58:27.964514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.964597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.964811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.964859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.965057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.965116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.965326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.965375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.965577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.965628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 
00:27:20.304 [2024-12-10 22:58:27.965812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.965861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.966057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.966106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.966303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.966353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.966565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.966614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.966827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.966883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 
00:27:20.304 [2024-12-10 22:58:27.967088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.967137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.967370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.967419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.967638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.967687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.967881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.967930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.968124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.968172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 
00:27:20.304 [2024-12-10 22:58:27.968403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.968452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.968625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.968685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.968862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.968911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.969107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.969155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.969310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.969360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 
00:27:20.304 [2024-12-10 22:58:27.969518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.969578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.969787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.969835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.970032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.970081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.970297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.970345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.970508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.970578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 
00:27:20.304 [2024-12-10 22:58:27.970763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.970813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.971005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.971052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.971265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.971314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.971542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.971600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.971797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.971844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 
00:27:20.304 [2024-12-10 22:58:27.972000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.972050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.972252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.304 [2024-12-10 22:58:27.972300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.304 qpair failed and we were unable to recover it. 00:27:20.304 [2024-12-10 22:58:27.972510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.972557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.972711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.972744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.972879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.972943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 
00:27:20.305 [2024-12-10 22:58:27.973110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.973159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.973385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.973433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.973638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.973687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.973847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.973894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.974135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.974170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 
00:27:20.305 [2024-12-10 22:58:27.974316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.974350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.974510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.974573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.974790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.974838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.975036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.975084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.975265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.975312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 
00:27:20.305 [2024-12-10 22:58:27.975463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.975510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.975690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.975751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.975964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.976014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.976180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.976231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.976433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.976483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 
00:27:20.305 [2024-12-10 22:58:27.976683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.976733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.976952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.977001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.977205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.977253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.977438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.977485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 00:27:20.305 [2024-12-10 22:58:27.977711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.977762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 
00:27:20.305 [2024-12-10 22:58:27.977964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.305 [2024-12-10 22:58:27.978014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.305 qpair failed and we were unable to recover it. 
[the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0x7f08e4000b90 (addr=10.0.0.2, port=4420) repeats continuously from 2024-12-10 22:58:27.977964 through 22:58:28.006854]
00:27:20.583 [2024-12-10 22:58:28.007013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-12-10 22:58:28.007066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-12-10 22:58:28.007285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-12-10 22:58:28.007337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-12-10 22:58:28.007559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-12-10 22:58:28.007612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-12-10 22:58:28.007851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-12-10 22:58:28.007901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-12-10 22:58:28.008118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-12-10 22:58:28.008169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 
00:27:20.583 [2024-12-10 22:58:28.008416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-12-10 22:58:28.008471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-12-10 22:58:28.008689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-12-10 22:58:28.008742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-12-10 22:58:28.008950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-12-10 22:58:28.009003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-12-10 22:58:28.009187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-12-10 22:58:28.009237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-12-10 22:58:28.009444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-12-10 22:58:28.009497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-12-10 22:58:28.009723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.009776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.009998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.010051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.010222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.010276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.010473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.010525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.010781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.010838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-12-10 22:58:28.011013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.011067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.011271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.011321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.011583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.011635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.011800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.011853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.012009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.012059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-12-10 22:58:28.012256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.012309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.012560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.012613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.012788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.012840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.012995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.013048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.013206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.013266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-12-10 22:58:28.013515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.013579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.013765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.013816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.014037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.014098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.014281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.014333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.014579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.014634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-12-10 22:58:28.014816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.014866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.015039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.015091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.015294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.015345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.015581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.015633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.015871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.015924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-12-10 22:58:28.016094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.016144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.016317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.016373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.016537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.016585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.016726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.016759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.016865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.016898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-12-10 22:58:28.017003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.017037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.017153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.017186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.017292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.017325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.017436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.017470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-12-10 22:58:28.017661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-12-10 22:58:28.017716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-12-10 22:58:28.017962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.018014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.018184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.018236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.018435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.018487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.018686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.018720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.018843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.018876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 
00:27:20.585 [2024-12-10 22:58:28.019069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.019103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.019236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.019274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.019463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.019522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.019765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.019818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.020018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.020101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 
00:27:20.585 [2024-12-10 22:58:28.020363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.020413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.020608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.020660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.020877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.020929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.021145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.021199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.021428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.021480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 
00:27:20.585 [2024-12-10 22:58:28.021686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.021740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.021958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.022010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.022246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.022298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.022566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.022620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.022817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.022887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 
00:27:20.585 [2024-12-10 22:58:28.023138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.023200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.023452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.023519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.023770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.023847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.024109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.024168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.024364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.024416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 
00:27:20.585 [2024-12-10 22:58:28.024597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.024649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.024832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.024883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.025088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.025150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.025377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.025429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 00:27:20.585 [2024-12-10 22:58:28.025636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.025691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 
00:27:20.585 [2024-12-10 22:58:28.025888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.585 [2024-12-10 22:58:28.025939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.585 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-12-10 22:58:28.056324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.056379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.056626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.056682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.056898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.056953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.057186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.057241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.057494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.057574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-12-10 22:58:28.057772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.057828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.058050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.058085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.058230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.058264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.058473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.058528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.058775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.058830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-12-10 22:58:28.059018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.059074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.059310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.059377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.059584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.059639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.059854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.059911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.060139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.060194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-12-10 22:58:28.060407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.060466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.060752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.060809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.060997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.061051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.061313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.061368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.061609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.061667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-12-10 22:58:28.061896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.061960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.062168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.062224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.062481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.062535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.062777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.062831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.063050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.063106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-12-10 22:58:28.063313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.063387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.063646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-12-10 22:58:28.063703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-12-10 22:58:28.063905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.063963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.064187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.064243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.064458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.064514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 
00:27:20.590 [2024-12-10 22:58:28.064726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.064781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.065044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.065099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.065257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.065314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.065568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.065626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.065841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.065910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 
00:27:20.590 [2024-12-10 22:58:28.066194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.066251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.066471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.066526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.066717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.066773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.066979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.067036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.067291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.067350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 
00:27:20.590 [2024-12-10 22:58:28.067572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.067628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.067849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.067905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.068117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.068173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.068437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.068494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.068719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.068777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 
00:27:20.590 [2024-12-10 22:58:28.068992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.069048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.069238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.069293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.069518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.069587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.069818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.069873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.070055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.070112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 
00:27:20.590 [2024-12-10 22:58:28.070374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.070432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.070723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.070781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.071047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.071102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.071351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.071411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.071651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.071708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 
00:27:20.590 [2024-12-10 22:58:28.071933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.071988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.072234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.072303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.072577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.072612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-12-10 22:58:28.072754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-12-10 22:58:28.072791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.072979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.073035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 
00:27:20.591 [2024-12-10 22:58:28.073230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.073286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.073481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.073538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.073760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.073815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.074047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.074102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.074277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.074344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 
00:27:20.591 [2024-12-10 22:58:28.074556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.074612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.074788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.074844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.075072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.075129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.075380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.075437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.075649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.075705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 
00:27:20.591 [2024-12-10 22:58:28.075938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.075998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.076275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.076337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.076617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.076675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.076864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.076930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.077167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.077238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 
00:27:20.591 [2024-12-10 22:58:28.077542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.077617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.077844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.077923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.078171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.078225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.078413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.078467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.078703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.078768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 
00:27:20.591 [2024-12-10 22:58:28.079025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.079081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.079350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.079407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.079665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.079722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.079906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.079961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 00:27:20.591 [2024-12-10 22:58:28.080174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.591 [2024-12-10 22:58:28.080231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.591 qpair failed and we were unable to recover it. 
00:27:20.591 [2024-12-10 22:58:28.080474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.591 [2024-12-10 22:58:28.080508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.591 qpair failed and we were unable to recover it.
00:27:20.591 [2024-12-10 22:58:28.080662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.591 [2024-12-10 22:58:28.080697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.591 qpair failed and we were unable to recover it.
00:27:20.591 [2024-12-10 22:58:28.080921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.591 [2024-12-10 22:58:28.080979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.591 qpair failed and we were unable to recover it.
00:27:20.591 [2024-12-10 22:58:28.081215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.591 [2024-12-10 22:58:28.081272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.591 qpair failed and we were unable to recover it.
00:27:20.591 [2024-12-10 22:58:28.081495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.591 [2024-12-10 22:58:28.081562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.591 qpair failed and we were unable to recover it.
00:27:20.591 [2024-12-10 22:58:28.081801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.591 [2024-12-10 22:58:28.081856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.591 qpair failed and we were unable to recover it.
00:27:20.591 [2024-12-10 22:58:28.082034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.591 [2024-12-10 22:58:28.082091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.591 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.082314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.082370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.082580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.082651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.082840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.082895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.083123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.083185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.083415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.083470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.083651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.083707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.083886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.083941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.084204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.084259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.084441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.084495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.084774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.084832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.084999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.085054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.085327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.085384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.085603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.085659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.085839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.085894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.086077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.086134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.086407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.086462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.086685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.086746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.087020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.087076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.087302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.087359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.087606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.087664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.087879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.087933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.088155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.088210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.088500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.088584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.088842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.088903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.089144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.089203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.089456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.089516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.089786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.089845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.090045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.090119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.090397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.090459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.090717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.090751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.090863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.090897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.091106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.091167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.091399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.091459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.091753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.592 [2024-12-10 22:58:28.091814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.592 qpair failed and we were unable to recover it.
00:27:20.592 [2024-12-10 22:58:28.092070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.092132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.092362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.092421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.092638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.092698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.092935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.092996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.093223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.093283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.093572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.093634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.093907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.093969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.094271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.094332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.094574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.094636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.094872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.094931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.095185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.095243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.095479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.095537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.095755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.095818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.096067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.096106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.096254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.096289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.096514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.096587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.096833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.096893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.097144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.097207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.097459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.097519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.097743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.097804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.098042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.098101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.098340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.098402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.098631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.098692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.098899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.098961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.099219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.099278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.099515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.099590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.099840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.099874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.100042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.100076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.100299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.100357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.100602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.100663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.100933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.100994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.101219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.101280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.101466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.101525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.101787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.101846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.593 [2024-12-10 22:58:28.102081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.593 [2024-12-10 22:58:28.102139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.593 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.102356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.102414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.102666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.102726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.102949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.102983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.103183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.103245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.103486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.103520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.103641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.103676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.103824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.103884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.104118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.104176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.104422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.104480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.104713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.104749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.104891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.104925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.105137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.105196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.105477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.105536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.105846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.105924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.106190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.106265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.106474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.106532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.106766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.106825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.107046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.107106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.107309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.594 [2024-12-10 22:58:28.107369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.594 qpair failed and we were unable to recover it.
00:27:20.594 [2024-12-10 22:58:28.107606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.107666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.107935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.108002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.108280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.108339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.108622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.108682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.108913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.108974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 
00:27:20.594 [2024-12-10 22:58:28.109247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.109307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.109527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.109606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.109856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.109915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.110191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.110251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.110489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.110560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 
00:27:20.594 [2024-12-10 22:58:28.110787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.110846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.111094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.111153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.111367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.111425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.111682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.111760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-12-10 22:58:28.112054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-12-10 22:58:28.112132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 
00:27:20.594 [2024-12-10 22:58:28.112379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.112438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.112781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.112817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.113068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.113146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.113424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.113482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.113781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.113859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 
00:27:20.595 [2024-12-10 22:58:28.114134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.114211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.114451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.114510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.114781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.114857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.115150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.115228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.115434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.115492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 
00:27:20.595 [2024-12-10 22:58:28.115797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.115881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.116145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.116222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.116463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.116524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.116852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.116909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.117135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.117190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 
00:27:20.595 [2024-12-10 22:58:28.117413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.117467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.117726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.117801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.118073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.118145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.118358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.118414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.118648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.118723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 
00:27:20.595 [2024-12-10 22:58:28.118897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.118954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.119204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.119259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.119512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.119578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.119796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.119851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.120133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.120206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 
00:27:20.595 [2024-12-10 22:58:28.120431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.120485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.120739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.120827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.121086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.121158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.121393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.121450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.121668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.121746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 
00:27:20.595 [2024-12-10 22:58:28.122009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.122064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.122282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-12-10 22:58:28.122336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-12-10 22:58:28.122508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.122577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.122746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.122802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.123044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.123116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 
00:27:20.596 [2024-12-10 22:58:28.123378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.123432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.123694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.123767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.124009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.124081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.124303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.124357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.124531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.124600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 
00:27:20.596 [2024-12-10 22:58:28.124860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.124934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.125213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.125285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.125506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.125571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.125858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.125930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.126214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.126287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 
00:27:20.596 [2024-12-10 22:58:28.126542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.126608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.126834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.126906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.127182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.127254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.127480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.127537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.127804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.127877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 
00:27:20.596 [2024-12-10 22:58:28.128067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.128138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.128388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.128441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.128749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.128823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.129124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.129196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.129459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.129513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 
00:27:20.596 [2024-12-10 22:58:28.129818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.129874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.130098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.130170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.130392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.130445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.130751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.130825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 00:27:20.596 [2024-12-10 22:58:28.131123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.596 [2024-12-10 22:58:28.131195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.596 qpair failed and we were unable to recover it. 
00:27:20.596 [2024-12-10 22:58:28.131465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.597 [2024-12-10 22:58:28.131520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.597 qpair failed and we were unable to recover it. 00:27:20.597 [2024-12-10 22:58:28.131831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.597 [2024-12-10 22:58:28.131903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.597 qpair failed and we were unable to recover it. 00:27:20.597 [2024-12-10 22:58:28.132106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.597 [2024-12-10 22:58:28.132184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.597 qpair failed and we were unable to recover it. 00:27:20.597 [2024-12-10 22:58:28.132434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.597 [2024-12-10 22:58:28.132488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.597 qpair failed and we were unable to recover it. 00:27:20.597 [2024-12-10 22:58:28.132766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.597 [2024-12-10 22:58:28.132842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.597 qpair failed and we were unable to recover it. 
00:27:20.597 [2024-12-10 22:58:28.133147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.597 [2024-12-10 22:58:28.133219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.597 qpair failed and we were unable to recover it. 00:27:20.597 [2024-12-10 22:58:28.133417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.597 [2024-12-10 22:58:28.133480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.597 qpair failed and we were unable to recover it. 00:27:20.597 [2024-12-10 22:58:28.133759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.597 [2024-12-10 22:58:28.133831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.597 qpair failed and we were unable to recover it. 00:27:20.597 [2024-12-10 22:58:28.134147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.597 [2024-12-10 22:58:28.134220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.597 qpair failed and we were unable to recover it. 00:27:20.597 [2024-12-10 22:58:28.134476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.597 [2024-12-10 22:58:28.134530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.597 qpair failed and we were unable to recover it. 
00:27:20.597 [2024-12-10 22:58:28.134755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.597 [2024-12-10 22:58:28.134829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.597 qpair failed and we were unable to recover it.
00:27:20.600 [... the identical connect()/errno = 111 retry sequence for tqpair=0x7f08e4000b90 (addr=10.0.0.2, port=4420) repeats continuously from 2024-12-10 22:58:28.135127 through 22:58:28.168801, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:20.600 [2024-12-10 22:58:28.169036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.169114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.169334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.169398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.169683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.169759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.170004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.170078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.170292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.170346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 
00:27:20.600 [2024-12-10 22:58:28.170542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.170632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.170834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.170908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.171136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.171221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.171457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.171514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.171754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.171830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 
00:27:20.600 [2024-12-10 22:58:28.172051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.172122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.172352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.172408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.172681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.172756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.172964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.173036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-12-10 22:58:28.173262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-12-10 22:58:28.173316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-12-10 22:58:28.173565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.173629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.173895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.173970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.174207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.174264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.174488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.174560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.174790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.174845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-12-10 22:58:28.175069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.175123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.175353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.175409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.175645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.175720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.176054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.176128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.176355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.176411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-12-10 22:58:28.176667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.176742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.176997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.177071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.177308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.177365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.177578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.177635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.177825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.177901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-12-10 22:58:28.178118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.178193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.178408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.178464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.178676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.178733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.178980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.179035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.179260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.179315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-12-10 22:58:28.179536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.179607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.179865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.179920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.180137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.180210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.180453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.180510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.180808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.180883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-12-10 22:58:28.181109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.181182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.181397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.181466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.181727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.181784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.182028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.182102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-12-10 22:58:28.182271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.182327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-12-10 22:58:28.182516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-12-10 22:58:28.182601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.182857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.182932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.183193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.183268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.183485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.183539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.183863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.183936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-12-10 22:58:28.184188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.184261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.184519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.184591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.184862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.184975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.185183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.185255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.185489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.185561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-12-10 22:58:28.185836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.185912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.186211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.186284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.186469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.186523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.186776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.186832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.187054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.187123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-12-10 22:58:28.187336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.187391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.187591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.187651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.187941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.188014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.188179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.188235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.188422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.188476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-12-10 22:58:28.188745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.188828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.189091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.189165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.189443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.189500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.189806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.189882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.190147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.190202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-12-10 22:58:28.190396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.190450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.190701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.190775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.190978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.191050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.191241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.191295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.191464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.191518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-12-10 22:58:28.191816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.191874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.192092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.192149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.192382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.192436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.192651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.192707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-12-10 22:58:28.192904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-12-10 22:58:28.192980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.606 [2024-12-10 22:58:28.225867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.225941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.226246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.226321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.226573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.226629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.226852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.226927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.227178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.227251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 
00:27:20.606 [2024-12-10 22:58:28.227485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.227541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.227808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.227886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.228150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.228226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.228481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.228538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.228780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.228854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 
00:27:20.606 [2024-12-10 22:58:28.229048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.229103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.229355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.229419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.229648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.229705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.229900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.229964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.230150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.230205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 
00:27:20.606 [2024-12-10 22:58:28.230470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.230526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.230788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.230843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.231113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.231169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.231354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.231411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.231658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.231732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 
00:27:20.606 [2024-12-10 22:58:28.231967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.232043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.232319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.232378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.232624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.232711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-12-10 22:58:28.232925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-12-10 22:58:28.233005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.233191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.233248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-12-10 22:58:28.233524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.233604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.233867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.233943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.234170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.234225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.234444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.234510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.234800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.234857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-12-10 22:58:28.235081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.235137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.235392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.235446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.235728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.235804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.236052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.236126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.236338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.236393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-12-10 22:58:28.236581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.236639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.236891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.236966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.237189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.237275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.237511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.237581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.237798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.237882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-12-10 22:58:28.238128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.238201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.238470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.238527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.238764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.238837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.239078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.239150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.239409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.239464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-12-10 22:58:28.239732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.239789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.240074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.240146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.240367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.240422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.240665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.240740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.240965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.241021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-12-10 22:58:28.241273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.241345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.241609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.241698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.241988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.242061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.242332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.242388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.242667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.242740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-12-10 22:58:28.242944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.243018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.243237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.243291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.243445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.243499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-12-10 22:58:28.243807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-12-10 22:58:28.243883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.244094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.244165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 
00:27:20.608 [2024-12-10 22:58:28.244386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.244440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.244738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.244812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.245058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.245132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.245357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.245411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.245651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.245727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 
00:27:20.608 [2024-12-10 22:58:28.246039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.246112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.246371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.246426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.246664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.246739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.247029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.247101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.247323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.247378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 
00:27:20.608 [2024-12-10 22:58:28.247651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.247726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.248018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.248089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.248318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.248372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.248658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.248734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 00:27:20.608 [2024-12-10 22:58:28.248917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.608 [2024-12-10 22:58:28.248971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.608 qpair failed and we were unable to recover it. 
00:27:20.608 [2024-12-10 22:58:28.249218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.608 [2024-12-10 22:58:28.249273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.608 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 to addr=10.0.0.2, port=4420, tqpair=0x7f08e4000b90) repeats continuously from 22:58:28.249536 through 22:58:28.286888 ...]
00:27:20.612 [2024-12-10 22:58:28.287180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.287252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.287509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.287577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.287861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.287934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.288202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.288257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.288512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.288581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 
00:27:20.612 [2024-12-10 22:58:28.288826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.288899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.289179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.289251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.289513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.289582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.289878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.289966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.290273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.290345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 
00:27:20.612 [2024-12-10 22:58:28.290521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.290608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.290920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.290996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.291253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.291325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.291584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.291639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.291884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.291956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 
00:27:20.612 [2024-12-10 22:58:28.292215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.292290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.292555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.292611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.292806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.292889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.293106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.293179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.293448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.293503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 
00:27:20.612 [2024-12-10 22:58:28.293801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.293880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.294126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.294199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.294430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.294485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.612 [2024-12-10 22:58:28.294760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.612 [2024-12-10 22:58:28.294834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.612 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.295127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.295200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 
00:27:20.879 [2024-12-10 22:58:28.295424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.295490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.295711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.295786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.296047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.296120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.296331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.296386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.296565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.296623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 
00:27:20.879 [2024-12-10 22:58:28.296907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.296979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.297218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.297292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.297563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.297619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.297915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.297987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.298275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.298347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 
00:27:20.879 [2024-12-10 22:58:28.298579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.298637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.298832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.298908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.299207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.299280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.299484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.299541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.299740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.299795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 
00:27:20.879 [2024-12-10 22:58:28.300079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.300151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.300416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.300471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.300731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.300806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.301016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.301088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.879 qpair failed and we were unable to recover it. 00:27:20.879 [2024-12-10 22:58:28.301258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.879 [2024-12-10 22:58:28.301312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 
00:27:20.880 [2024-12-10 22:58:28.301505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.301588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.301892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.301965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.302260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.302333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.302592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.302656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.302875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.302931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 
00:27:20.880 [2024-12-10 22:58:28.303207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.303279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.303570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.303626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.303886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.303959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.304191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.304264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.304490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.304565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 
00:27:20.880 [2024-12-10 22:58:28.304857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.304931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.305236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.305309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.305508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.305574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.305840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.305912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.306207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.306281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 
00:27:20.880 [2024-12-10 22:58:28.306532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.306597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.306889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.306962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.307194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.307266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.307521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.307588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.307827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.307900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 
00:27:20.880 [2024-12-10 22:58:28.308186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.308258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.308482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.308539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.308843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.308927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.309171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.309243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.309440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.309493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 
00:27:20.880 [2024-12-10 22:58:28.309763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.309838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.310102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.310175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.310383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.310437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.310733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.310807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.311063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.311134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 
00:27:20.880 [2024-12-10 22:58:28.311412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.311467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.311731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.311805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.312018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.312074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.880 [2024-12-10 22:58:28.312344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.880 [2024-12-10 22:58:28.312399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.880 qpair failed and we were unable to recover it. 00:27:20.881 [2024-12-10 22:58:28.312697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.881 [2024-12-10 22:58:28.312773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.881 qpair failed and we were unable to recover it. 
00:27:20.884 [2024-12-10 22:58:28.347499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.347565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.347790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.347846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.348093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.348147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.348362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.348419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.348604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.348661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 
00:27:20.884 [2024-12-10 22:58:28.348971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.349046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.349248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.349303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.349585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.349641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.349872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.349945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.350172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.350230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 
00:27:20.884 [2024-12-10 22:58:28.350485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.350539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.350765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.350837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.351129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.351202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.351436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.351493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.351797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.351871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 
00:27:20.884 [2024-12-10 22:58:28.352102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.352177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.352425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.352481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.352762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.352840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.353066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.353130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.353381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.353434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 
00:27:20.884 [2024-12-10 22:58:28.353668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.353742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.354024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.354096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.354273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.354327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.884 [2024-12-10 22:58:28.354533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.884 [2024-12-10 22:58:28.354599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.884 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.354848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.354919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 
00:27:20.885 [2024-12-10 22:58:28.355220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.355293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.355473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.355527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.355742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.355796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.356007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.356061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.356342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.356396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 
00:27:20.885 [2024-12-10 22:58:28.356638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.356713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.356923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.356997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.357291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.357365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.357591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.357648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.357857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.357930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 
00:27:20.885 [2024-12-10 22:58:28.358122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.358198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.358383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.358436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.358674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.358750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.358988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.359059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.359312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.359366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 
00:27:20.885 [2024-12-10 22:58:28.359575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.359630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.359881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.359953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.360164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.360219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.360443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.360498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.360737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.360811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 
00:27:20.885 [2024-12-10 22:58:28.361077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.361152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.361365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.361421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.361674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.361731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.362020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.362093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.362326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.362380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 
00:27:20.885 [2024-12-10 22:58:28.362657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.362730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.362949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.363020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.363255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.363308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.363557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.363615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.363859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.363914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 
00:27:20.885 [2024-12-10 22:58:28.364171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.364224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.364434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.364490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.364731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.364788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.885 qpair failed and we were unable to recover it. 00:27:20.885 [2024-12-10 22:58:28.365014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.885 [2024-12-10 22:58:28.365079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.365292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.365346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 
00:27:20.886 [2024-12-10 22:58:28.365600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.365657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.365984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.366057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.366284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.366339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.366607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.366686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.366972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.367043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 
00:27:20.886 [2024-12-10 22:58:28.367237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.367313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.367481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.367537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.367838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.367914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.368155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.368227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.368397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.368452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 
00:27:20.886 [2024-12-10 22:58:28.368755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.368830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.369090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.369163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.369437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.369492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.369779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.369858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 00:27:20.886 [2024-12-10 22:58:28.370065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.886 [2024-12-10 22:58:28.370138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.886 qpair failed and we were unable to recover it. 
00:27:20.886 [2024-12-10 22:58:28.370332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.886 [2024-12-10 22:58:28.370386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.886 qpair failed and we were unable to recover it.
00:27:20.890 [2024-12-10 22:58:28.405438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.405492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.405721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.405796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.406049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.406121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.406301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.406358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.406620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.406698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 
00:27:20.890 [2024-12-10 22:58:28.406905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.406995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.407217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.407271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.407482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.407540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.407777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.407848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.408137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.408211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 
00:27:20.890 [2024-12-10 22:58:28.408478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.408532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.408775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.408831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.409048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.409107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.409325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.409379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.409596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.409654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 
00:27:20.890 [2024-12-10 22:58:28.409920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.409995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.410226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.410282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.410495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.410562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.410840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.410915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.411170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.411247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 
00:27:20.890 [2024-12-10 22:58:28.411455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.411516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.411784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.411859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.412155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.412228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.412452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.412508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.412668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.412723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 
00:27:20.890 [2024-12-10 22:58:28.412945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.413016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.413220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.413275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.413568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.413626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.413933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.414018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.414263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.414339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 
00:27:20.890 [2024-12-10 22:58:28.414612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.414679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.414984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.415058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.415261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.415346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.415577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.890 [2024-12-10 22:58:28.415663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.890 qpair failed and we were unable to recover it. 00:27:20.890 [2024-12-10 22:58:28.415966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.416042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 
00:27:20.891 [2024-12-10 22:58:28.416271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.416326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.416600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.416678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.416970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.417042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.417294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.417365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.417587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.417643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 
00:27:20.891 [2024-12-10 22:58:28.417895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.417971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.418171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.418245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.418442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.418499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.418765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.418840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.419083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.419140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 
00:27:20.891 [2024-12-10 22:58:28.419408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.419463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.419748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.419823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.420032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.420120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.420353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.420406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.420606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.420665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 
00:27:20.891 [2024-12-10 22:58:28.420924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.420997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.421288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.421362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.421637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.421712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.421943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.422016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.422272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.422327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 
00:27:20.891 [2024-12-10 22:58:28.422580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.422648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.422847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.422925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.423204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.423258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.423473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.423530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.423775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.423849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 
00:27:20.891 [2024-12-10 22:58:28.424145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.424219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.424430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.424485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.424783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.424841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.425124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.425180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.425412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.425469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 
00:27:20.891 [2024-12-10 22:58:28.425644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.425700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.425956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.426010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.426231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.426288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.891 [2024-12-10 22:58:28.426510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.891 [2024-12-10 22:58:28.426606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.891 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.426844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.426918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 
00:27:20.892 [2024-12-10 22:58:28.427200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.427273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.427461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.427516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.427783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.427856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.428048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.428101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.428311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.428367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 
00:27:20.892 [2024-12-10 22:58:28.428538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.428607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.428797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.428857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.429049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.429106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.429344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.429400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.429682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.429757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 
00:27:20.892 [2024-12-10 22:58:28.430005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.430079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.430273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.430328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.430573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.430630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.430814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.430871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 00:27:20.892 [2024-12-10 22:58:28.431094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.892 [2024-12-10 22:58:28.431151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.892 qpair failed and we were unable to recover it. 
00:27:20.892 [2024-12-10 22:58:28.431407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.431462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.431697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.431754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.431951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.432006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.432232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.432287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.432563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.432619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.432872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.432945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.433249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.433327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.433589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.433648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.433863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.433938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.434178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.434233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.434461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.434525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.434761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.434817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.435038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.892 [2024-12-10 22:58:28.435091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.892 qpair failed and we were unable to recover it.
00:27:20.892 [2024-12-10 22:58:28.435296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.435364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.435600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.435658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.435857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.435933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.436227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.436303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.436514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.436589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.436805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.436885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.437144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.437199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.437374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.437443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.437664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.437723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.437966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.438023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.438294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.438348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.438626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.438683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.438948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.439003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.439213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.439268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.439454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.439515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.439741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.439798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.440019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.440075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.440337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.440394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.440629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.440686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.440905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.440959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.441169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.441223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.441408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.441466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.441729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.441786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.442013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.442075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.442339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.442395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.442631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.442706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.442962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.443035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.443270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.443326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.443564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.443621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.443853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.443934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.444116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.444172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.444356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.444412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.444653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.444728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.445019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.893 [2024-12-10 22:58:28.445074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.893 qpair failed and we were unable to recover it.
00:27:20.893 [2024-12-10 22:58:28.445317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.445371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.445571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.445627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.445823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.445912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.446194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.446259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.446481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.446560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.446776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.446850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.447083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.447138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.447330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.447396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.447649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.447722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.447904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.447958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.448168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.448232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.448455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.448510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.448743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.448801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.449019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.449074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.449333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.449387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.449618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.449674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.449904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.449958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.450224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.450291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.450583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.450642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.450948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.451028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.451286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.451359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.451607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.451684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.451977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.452049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.452314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.452370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.452568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.452636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.452898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.452971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.453220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.453295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.453513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.453583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.453832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.894 [2024-12-10 22:58:28.453909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.894 qpair failed and we were unable to recover it.
00:27:20.894 [2024-12-10 22:58:28.454194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.454267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.454475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.454529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.454764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.454839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.455124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.455206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.455423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.455481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.455742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.455816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.456116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.456192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.456422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.456480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.456780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.456853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.457054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.457128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.457360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.457416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.457651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.457731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.458019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.458102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.458362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.458419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.458604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.458671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.458933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.458990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.459258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.459313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.459568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.459624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.459807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.895 [2024-12-10 22:58:28.459863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.895 qpair failed and we were unable to recover it.
00:27:20.895 [2024-12-10 22:58:28.460064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.460119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 00:27:20.895 [2024-12-10 22:58:28.460354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.460412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 00:27:20.895 [2024-12-10 22:58:28.460710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.460784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 00:27:20.895 [2024-12-10 22:58:28.461110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.461171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 00:27:20.895 [2024-12-10 22:58:28.461411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.461468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 
00:27:20.895 [2024-12-10 22:58:28.461697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.461774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 00:27:20.895 [2024-12-10 22:58:28.461987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.462043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 00:27:20.895 [2024-12-10 22:58:28.462267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.462324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 00:27:20.895 [2024-12-10 22:58:28.462534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.462623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 00:27:20.895 [2024-12-10 22:58:28.462856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.462914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 
00:27:20.895 [2024-12-10 22:58:28.463147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.463204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 00:27:20.895 [2024-12-10 22:58:28.463419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.463475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 00:27:20.895 [2024-12-10 22:58:28.463738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.895 [2024-12-10 22:58:28.463814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.895 qpair failed and we were unable to recover it. 00:27:20.895 [2024-12-10 22:58:28.464042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.464116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.464349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.464404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 
00:27:20.896 [2024-12-10 22:58:28.464597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.464668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.464934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.465009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.465239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.465303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.465541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.465607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.465874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.465961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 
00:27:20.896 [2024-12-10 22:58:28.466209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.466285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.466470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.466526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.466780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.466855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.467078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.467150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.467366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.467423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 
00:27:20.896 [2024-12-10 22:58:28.467624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.467680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.467878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.467932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.468172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.468234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.468478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.468533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.468777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.468851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 
00:27:20.896 [2024-12-10 22:58:28.469146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.469219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.469445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.469500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.469702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.469777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.469990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.470044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.470221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.470283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 
00:27:20.896 [2024-12-10 22:58:28.470504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.470594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.470788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.470870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.471121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.471196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.471423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.471479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.471724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.471797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 
00:27:20.896 [2024-12-10 22:58:28.472045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.472120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.472315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.472372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.472614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.472693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.472914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.472979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.473168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.473222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 
00:27:20.896 [2024-12-10 22:58:28.473451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.473515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.473761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.473815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.474055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.896 [2024-12-10 22:58:28.474112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.896 qpair failed and we were unable to recover it. 00:27:20.896 [2024-12-10 22:58:28.474334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.474389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.475930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.475995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 
00:27:20.897 [2024-12-10 22:58:28.476262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.476319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.476563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.476624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.476856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.476931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.477150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.477222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.477443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.477497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 
00:27:20.897 [2024-12-10 22:58:28.477684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.477740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.478005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.478078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.478330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.478384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.478627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.478722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.479014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.479089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 
00:27:20.897 [2024-12-10 22:58:28.479355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.479420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.479674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.479749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.480057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.480138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.480368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.480424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.480675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.480749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 
00:27:20.897 [2024-12-10 22:58:28.481040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.481115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.481333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.481388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.481680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.481756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.482021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.482095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.482330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.482387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 
00:27:20.897 [2024-12-10 22:58:28.482604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.482684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.482956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.483029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.483259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.483328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.483601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.483681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.483937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.484011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 
00:27:20.897 [2024-12-10 22:58:28.484215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.484281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.484540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.484606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.484829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.484901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.485167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-12-10 22:58:28.485222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-12-10 22:58:28.485454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-12-10 22:58:28.485523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 
00:27:20.898 [2024-12-10 22:58:28.485770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-12-10 22:58:28.485848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-12-10 22:58:28.486103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-12-10 22:58:28.486182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-12-10 22:58:28.486448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-12-10 22:58:28.486502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-12-10 22:58:28.486851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-12-10 22:58:28.486928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-12-10 22:58:28.487187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-12-10 22:58:28.487241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 
00:27:20.898 [2024-12-10 22:58:28.487419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:20.898 [2024-12-10 22:58:28.487475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 
00:27:20.898 qpair failed and we were unable to recover it. 
00:27:20.898 [the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats continuously with timestamps from 2024-12-10 22:58:28.487747 through 22:58:28.523559]
00:27:20.903 [2024-12-10 22:58:28.523813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.523899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.524194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.524268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.524529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.524600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.524916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.524993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.525290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.525363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 
00:27:20.903 [2024-12-10 22:58:28.525587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.525645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.525857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.525937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.526220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.526278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.526505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.526601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.526835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.526890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 
00:27:20.903 [2024-12-10 22:58:28.527101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.527156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.527383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.527443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.527701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.527775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.528060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.528136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.528385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.528442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 
00:27:20.903 [2024-12-10 22:58:28.528723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.528800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.529023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.529079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.529288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.529342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.529566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-12-10 22:58:28.529622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-12-10 22:58:28.529797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.529853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 
00:27:20.904 [2024-12-10 22:58:28.530077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.530134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.530372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.530429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.530721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.530821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.531146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.531215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.531515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.531626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 
00:27:20.904 [2024-12-10 22:58:28.531887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.531943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.532190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.532256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.532570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.532650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.532908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.532965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.533172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.533248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 
00:27:20.904 [2024-12-10 22:58:28.533541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.533623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.533907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.533968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.534264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.534330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.534633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.534693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.534934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.534993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 
00:27:20.904 [2024-12-10 22:58:28.535266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.535345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.535615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.535673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.535900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.535970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.536272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.536344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.536592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.536651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 
00:27:20.904 [2024-12-10 22:58:28.536876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.536933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.537212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-12-10 22:58:28.537282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-12-10 22:58:28.537570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.537645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.537911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.537969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.538194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.538266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-12-10 22:58:28.538615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.538674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.538900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.538956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.539239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.539315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.539622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.539680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.539932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.539988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-12-10 22:58:28.540241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.540309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.540627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.540686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.540918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.540975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.541179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.541235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.541498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.541566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-12-10 22:58:28.541802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.541863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.542135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.542200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.542440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.542506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.542823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.542884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.543203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.543270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-12-10 22:58:28.543569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.543655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.543885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.543942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.544278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.544390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.544664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.544725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.544960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.545016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-12-10 22:58:28.545281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-12-10 22:58:28.545363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-12-10 22:58:28.545590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-12-10 22:58:28.545649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-12-10 22:58:28.545919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-12-10 22:58:28.545995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-12-10 22:58:28.546244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-12-10 22:58:28.546320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-12-10 22:58:28.546584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-12-10 22:58:28.546641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 
00:27:20.906 [2024-12-10 22:58:28.546901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-12-10 22:58:28.546985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-12-10 22:58:28.547278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-12-10 22:58:28.547351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-12-10 22:58:28.547575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-12-10 22:58:28.547631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-12-10 22:58:28.547886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-12-10 22:58:28.547963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-12-10 22:58:28.548258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-12-10 22:58:28.548333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 
00:27:20.906 [2024-12-10 22:58:28.548570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.906 [2024-12-10 22:58:28.548638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.906 qpair failed and we were unable to recover it.
00:27:20.907 [2024-12-10 22:58:28.556530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.907 [2024-12-10 22:58:28.556611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:20.907 qpair failed and we were unable to recover it.
00:27:20.907 [2024-12-10 22:58:28.556884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.907 [2024-12-10 22:58:28.556984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.907 qpair failed and we were unable to recover it.
00:27:20.911 [2024-12-10 22:58:28.586052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.911 [2024-12-10 22:58:28.586118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:20.911 qpair failed and we were unable to recover it.
00:27:20.911 [2024-12-10 22:58:28.586375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.586440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-12-10 22:58:28.586709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.586777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-12-10 22:58:28.587006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.587072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-12-10 22:58:28.587324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.587393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-12-10 22:58:28.587688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.587768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 
00:27:20.911 [2024-12-10 22:58:28.588033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.588100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-12-10 22:58:28.588407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.588474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-12-10 22:58:28.588763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.588843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-12-10 22:58:28.589085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.589152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-12-10 22:58:28.589451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.589518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 
00:27:20.911 [2024-12-10 22:58:28.589793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.589863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-12-10 22:58:28.590177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-12-10 22:58:28.590245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-12-10 22:58:28.590529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.590619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.590869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.590936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.591217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.591285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 
00:27:20.912 [2024-12-10 22:58:28.591502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.591603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.591860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.591928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.592208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.592287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.592584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.592653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.592964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.593031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 
00:27:20.912 [2024-12-10 22:58:28.593325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.593396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.593701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.593770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.594025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.594092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.594299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.594367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.594664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.594734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 
00:27:20.912 [2024-12-10 22:58:28.595036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.595103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.595318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.595385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.595649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.595729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.596005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.596073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.596385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.596451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 
00:27:20.912 [2024-12-10 22:58:28.596744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.596814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.597063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.597132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.597384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.597460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.597694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.597761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.597998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.598072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 
00:27:20.912 [2024-12-10 22:58:28.598301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.598368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.598621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-12-10 22:58:28.598694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-12-10 22:58:28.598919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.913 [2024-12-10 22:58:28.598985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.913 qpair failed and we were unable to recover it. 00:27:20.913 [2024-12-10 22:58:28.599278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.913 [2024-12-10 22:58:28.599346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.913 qpair failed and we were unable to recover it. 00:27:20.913 [2024-12-10 22:58:28.599607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.913 [2024-12-10 22:58:28.599675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.913 qpair failed and we were unable to recover it. 
00:27:20.913 [2024-12-10 22:58:28.599925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.913 [2024-12-10 22:58:28.599989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:20.913 qpair failed and we were unable to recover it. 00:27:20.913 [2024-12-10 22:58:28.600300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.913 [2024-12-10 22:58:28.600367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.600662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.600732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.600991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.601054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.601353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.601418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 
00:27:21.186 [2024-12-10 22:58:28.601756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.601825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.602055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.602120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.602374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.602440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.602744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.602827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.603108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.603173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 
00:27:21.186 [2024-12-10 22:58:28.603432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.603498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.603802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.603883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.604150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.604215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.604457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.604523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.604795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.604860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 
00:27:21.186 [2024-12-10 22:58:28.605129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.605198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.605409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.605479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.605801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.605869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.606118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.606183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.606474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.606541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 
00:27:21.186 [2024-12-10 22:58:28.606857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.606923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.607173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.607238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.607576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.607658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.607930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.607996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.608217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.608283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 
00:27:21.186 [2024-12-10 22:58:28.608537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.608622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.608883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.608952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.609170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.609236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.609483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.609566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.609879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.609958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 
00:27:21.186 [2024-12-10 22:58:28.610228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.610295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.610587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.610656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.610946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.611023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.611302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.611371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 00:27:21.186 [2024-12-10 22:58:28.611601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.611670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 
00:27:21.186 [2024-12-10 22:58:28.611960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.186 [2024-12-10 22:58:28.612026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.186 qpair failed and we were unable to recover it. 
00:27:21.189 [last message pair repeated for each subsequent reconnect attempt through 2024-12-10 22:58:28.651395; every attempt against addr=10.0.0.2, port=4420 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:27:21.189 [2024-12-10 22:58:28.651650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.651735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.652010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.652075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.652323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.652399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.652664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.652736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.653065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.653133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 
00:27:21.189 [2024-12-10 22:58:28.653338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.653405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.653700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.653768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.654030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.654100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.654323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.654389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.654641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.654709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 
00:27:21.189 [2024-12-10 22:58:28.655013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.655091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.655413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.655480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.655750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.655816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.656036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.656102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.656387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.656456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 
00:27:21.189 [2024-12-10 22:58:28.656704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.656771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.657052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.657118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.657330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.657407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.657655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.657723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.658024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.658090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 
00:27:21.189 [2024-12-10 22:58:28.658355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.658420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.658673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.658742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.659045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.659110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.189 qpair failed and we were unable to recover it. 00:27:21.189 [2024-12-10 22:58:28.659335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.189 [2024-12-10 22:58:28.659403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.659698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.659784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 
00:27:21.190 [2024-12-10 22:58:28.660040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.660104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.660410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.660474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.660708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.660782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.661047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.661115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.661427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.661492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 
00:27:21.190 [2024-12-10 22:58:28.661784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.661854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.662078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.662145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.662361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.662429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.662695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.662763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.663056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.663121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 
00:27:21.190 [2024-12-10 22:58:28.663409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.663478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.663758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.663825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.664112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.664177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.664420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.664505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.664812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.664880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 
00:27:21.190 [2024-12-10 22:58:28.665147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.665213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.665480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.665581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.665877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.665955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.666177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.666245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.666451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.666517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 
00:27:21.190 [2024-12-10 22:58:28.666790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.666877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.667092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.667160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.667455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.667520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.667812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.667878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.668111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.668194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 
00:27:21.190 [2024-12-10 22:58:28.668416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.668482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.668801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.668868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.669132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.669212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.669531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.669631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.669915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.669980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 
00:27:21.190 [2024-12-10 22:58:28.670176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.670242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.670532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.670623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.670867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.670932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.671192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.671258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.671503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.671586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 
00:27:21.190 [2024-12-10 22:58:28.671919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.671984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.672283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.672350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.672606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.672678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.672975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.673044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.673272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.673339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 
00:27:21.190 [2024-12-10 22:58:28.673649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.673715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.674008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.674078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.674384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.674452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.190 [2024-12-10 22:58:28.674730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.190 [2024-12-10 22:58:28.674798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.190 qpair failed and we were unable to recover it. 00:27:21.191 [2024-12-10 22:58:28.675081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.191 [2024-12-10 22:58:28.675155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.191 qpair failed and we were unable to recover it. 
00:27:21.191 [2024-12-10 22:58:28.675428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.191 [2024-12-10 22:58:28.675496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.191 qpair failed and we were unable to recover it. 00:27:21.191 [2024-12-10 22:58:28.675822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.191 [2024-12-10 22:58:28.675889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.191 qpair failed and we were unable to recover it. 00:27:21.191 [2024-12-10 22:58:28.676129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.191 [2024-12-10 22:58:28.676194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.191 qpair failed and we were unable to recover it. 00:27:21.191 [2024-12-10 22:58:28.676448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.191 [2024-12-10 22:58:28.676516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.191 qpair failed and we were unable to recover it. 00:27:21.191 [2024-12-10 22:58:28.676854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.191 [2024-12-10 22:58:28.676921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.191 qpair failed and we were unable to recover it. 
00:27:21.191 [2024-12-10 22:58:28.677284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.191 [2024-12-10 22:58:28.677354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.191 qpair failed and we were unable to recover it. 00:27:21.191 [2024-12-10 22:58:28.677647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.191 [2024-12-10 22:58:28.677716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.191 qpair failed and we were unable to recover it. 00:27:21.191 [2024-12-10 22:58:28.677986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.191 [2024-12-10 22:58:28.678052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.191 qpair failed and we were unable to recover it. 00:27:21.191 [2024-12-10 22:58:28.678300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.191 [2024-12-10 22:58:28.678364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.191 qpair failed and we were unable to recover it. 00:27:21.191 [2024-12-10 22:58:28.678621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.191 [2024-12-10 22:58:28.678690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.191 qpair failed and we were unable to recover it. 
00:27:21.193 [2024-12-10 22:58:28.715663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.193 [2024-12-10 22:58:28.715731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.193 qpair failed and we were unable to recover it. 00:27:21.193 [2024-12-10 22:58:28.716042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.193 [2024-12-10 22:58:28.716113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.193 qpair failed and we were unable to recover it. 00:27:21.193 [2024-12-10 22:58:28.716395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.193 [2024-12-10 22:58:28.716463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.193 qpair failed and we were unable to recover it. 00:27:21.193 [2024-12-10 22:58:28.716741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.193 [2024-12-10 22:58:28.716807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.193 qpair failed and we were unable to recover it. 00:27:21.193 [2024-12-10 22:58:28.717065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.193 [2024-12-10 22:58:28.717133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.193 qpair failed and we were unable to recover it. 
00:27:21.193 [2024-12-10 22:58:28.717397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.193 [2024-12-10 22:58:28.717466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.193 qpair failed and we were unable to recover it. 00:27:21.193 [2024-12-10 22:58:28.717757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.193 [2024-12-10 22:58:28.717826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.193 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.718087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.718152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.718440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.718510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.718777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.718844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 
00:27:21.194 [2024-12-10 22:58:28.719094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.719159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.719424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.719490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.719758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.719837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.720140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.720206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.720507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.720591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 
00:27:21.194 [2024-12-10 22:58:28.720866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.720943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.721185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.721251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.721493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.721580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.721882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.721947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.722156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.722223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 
00:27:21.194 [2024-12-10 22:58:28.722441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.722505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.722775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.722842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.723146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.723223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.723458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.723524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.723809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.723874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 
00:27:21.194 [2024-12-10 22:58:28.724171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.724237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.724504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.724589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.724864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.724931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.725191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.725257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.725574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.725654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 
00:27:21.194 [2024-12-10 22:58:28.725912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.725980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.726236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.726302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.726617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.726685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.726975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.727057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.727319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.727385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 
00:27:21.194 [2024-12-10 22:58:28.727627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.727695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.727939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.728004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.728280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.728348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.728576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.728643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.728911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.728978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 
00:27:21.194 [2024-12-10 22:58:28.729271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.729337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.729628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.729697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.729955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.730020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.730239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.730305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.730542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.730641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 
00:27:21.194 [2024-12-10 22:58:28.730978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.731046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.731271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.731336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.731626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.731694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.731903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.731972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.732224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.732290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 
00:27:21.194 [2024-12-10 22:58:28.732535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.732619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.732867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.194 [2024-12-10 22:58:28.732943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.194 qpair failed and we were unable to recover it. 00:27:21.194 [2024-12-10 22:58:28.733220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.733297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.733577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.733646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.733898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.733964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 
00:27:21.195 [2024-12-10 22:58:28.734252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.734318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.734600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.734668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.734985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.735052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.735341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.735418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.735701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.735770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 
00:27:21.195 [2024-12-10 22:58:28.736032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.736100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.736361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.736426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.736677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.736754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.737000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.737065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.737335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.737399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 
00:27:21.195 [2024-12-10 22:58:28.737661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.737729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.738009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.738078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.738368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.738433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.738750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.738818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.739117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.739186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 
00:27:21.195 [2024-12-10 22:58:28.739442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.739509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.739791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.739857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.740154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.740222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.740526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.740612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.740824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.740891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 
00:27:21.195 [2024-12-10 22:58:28.741112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.741178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.741425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.741496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.741837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.741904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.742167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.742232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 00:27:21.195 [2024-12-10 22:58:28.742540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.195 [2024-12-10 22:58:28.742656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.195 qpair failed and we were unable to recover it. 
00:27:21.198 [2024-12-10 22:58:28.780185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.780250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.780494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.780579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.780851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.780917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.781179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.781244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.781500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.781582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 
00:27:21.198 [2024-12-10 22:58:28.781828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.781893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.782085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.782150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.782439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.782503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.782806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.782872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.783122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.783199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 
00:27:21.198 [2024-12-10 22:58:28.783495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.783581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.783898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.783963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.784263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.784329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.784580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.784647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.784912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.784977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 
00:27:21.198 [2024-12-10 22:58:28.785246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.785313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.785573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.785640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.785945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.786011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.786272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.786336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.786543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.786641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 
00:27:21.198 [2024-12-10 22:58:28.786925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.786990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.787187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.787251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.787503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.787587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.787905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.787971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.788229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.788294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 
00:27:21.198 [2024-12-10 22:58:28.788496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.788579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.788872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.788937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.789212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.789282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.789534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.789617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.789907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.789973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 
00:27:21.198 [2024-12-10 22:58:28.790257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.790323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.790601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.790668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.198 [2024-12-10 22:58:28.790950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.198 [2024-12-10 22:58:28.791015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.198 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.791314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.791378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.791591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.791659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-12-10 22:58:28.791882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.791948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.792257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.792322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.792578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.792644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.792889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.792954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.793256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.793321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-12-10 22:58:28.793630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.793695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.793991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.794056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.794351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.794415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.794668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.794734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.795039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.795103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-12-10 22:58:28.795396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.795461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.795769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.795834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.796054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.796119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.796365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.796430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.796650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.796732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-12-10 22:58:28.796940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.797008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.797268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.797334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.797625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.797691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.797960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.798024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.798265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.798330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-12-10 22:58:28.798628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.798695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.798950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.799014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.799234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.799300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.799567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.799635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.799905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.799971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-12-10 22:58:28.800265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.800331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.800592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.800660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.800920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.800986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.801301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.801367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.801613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.801681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-12-10 22:58:28.801984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.802048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.802298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.802365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.802672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.802739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.803000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.803065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.803263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.803329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-12-10 22:58:28.803533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.803613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.803921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.803987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.804239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.804306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.804566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.804633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-12-10 22:58:28.804928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.804993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-12-10 22:58:28.805250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.199 [2024-12-10 22:58:28.805316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.199 qpair failed and we were unable to recover it.
[The same three-line error sequence (connect() failed with errno = 111, sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 22:58:28.805579 through 22:58:28.843677; the duplicate entries are elided here.]
00:27:21.202 [2024-12-10 22:58:28.843968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.844033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.844334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.844400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.844696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.844763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.845008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.845074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.845334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.845398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 
00:27:21.202 [2024-12-10 22:58:28.845657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.845725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.846028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.846092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.846390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.846455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.846721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.846787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.847039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.847104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 
00:27:21.202 [2024-12-10 22:58:28.847312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.847380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.847636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.847703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.847939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.848004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.848290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.848358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.848621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.848688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 
00:27:21.202 [2024-12-10 22:58:28.848990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.202 [2024-12-10 22:58:28.849054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.202 qpair failed and we were unable to recover it. 00:27:21.202 [2024-12-10 22:58:28.849349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.849414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.849688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.849757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.850026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.850090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.850300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.850375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 
00:27:21.203 [2024-12-10 22:58:28.850672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.850740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.851044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.851109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.851361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.851426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.851634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.851701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.851964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.852029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 
00:27:21.203 [2024-12-10 22:58:28.852307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.852375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.852634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.852699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.852957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.853022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.853317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.853382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.853645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.853712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 
00:27:21.203 [2024-12-10 22:58:28.853986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.854052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.854293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.854359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.854653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.854718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.855000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.855066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.855317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.855384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 
00:27:21.203 [2024-12-10 22:58:28.855640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.855707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.856001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.856065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.856370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.856435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.856742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.856809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.857119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.857184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 
00:27:21.203 [2024-12-10 22:58:28.857385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.857451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.857768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.857834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.858130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.858195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.858489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.858588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.858892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.858957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 
00:27:21.203 [2024-12-10 22:58:28.859219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.859285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.859600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.859669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.859918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.859983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.860234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.860298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.860564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.860633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 
00:27:21.203 [2024-12-10 22:58:28.860861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.860927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.861227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.861292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.861560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.861628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.861937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.862002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.862297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.862362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 
00:27:21.203 [2024-12-10 22:58:28.862598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.862666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.862886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.862951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.863193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.863258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.863574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.863642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.203 [2024-12-10 22:58:28.863863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.863944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 
00:27:21.203 [2024-12-10 22:58:28.864240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.203 [2024-12-10 22:58:28.864305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.203 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.864607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.864676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.864970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.865035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.865338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.865403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.865659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.865726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 
00:27:21.204 [2024-12-10 22:58:28.866031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.866097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.866349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.866414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.866625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.866692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.866943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.867008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.867307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.867372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 
00:27:21.204 [2024-12-10 22:58:28.867667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.867732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.867991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.868056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.868305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.868369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.868646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.868713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.868953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.869018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 
00:27:21.204 [2024-12-10 22:58:28.869320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.869385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.869625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.869691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.869953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.870021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.870321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.870387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.870611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.870678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 
00:27:21.204 [2024-12-10 22:58:28.870918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.870984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.871222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.871288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.871577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.871643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.871942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.872007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.872252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.872317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 
00:27:21.204 [2024-12-10 22:58:28.872620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.872687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.872984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.873050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.873344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.873409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.873669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.873736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.874039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.874105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 
00:27:21.204 [2024-12-10 22:58:28.874320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.874387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.874658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.874724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.875014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.875078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.875388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.875454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.875764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.875831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 
00:27:21.204 [2024-12-10 22:58:28.876084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 179890 Killed "${NVMF_APP[@]}" "$@" 00:27:21.204 [2024-12-10 22:58:28.876149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.876343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.876409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.876724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.876791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:21.204 [2024-12-10 22:58:28.877085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.877160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 
00:27:21.204 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:21.204 [2024-12-10 22:58:28.877449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.877515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:21.204 [2024-12-10 22:58:28.877831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:21.204 [2024-12-10 22:58:28.877896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.204 [2024-12-10 22:58:28.878211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.878277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 00:27:21.204 [2024-12-10 22:58:28.878467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.204 [2024-12-10 22:58:28.878533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.204 qpair failed and we were unable to recover it. 
00:27:21.204 [2024-12-10 22:58:28.878796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.878862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.879155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.879220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.879485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.879566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.879846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.879912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.880201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.880266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 
00:27:21.205 [2024-12-10 22:58:28.880515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.880594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.880819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.880885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.881139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.881204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.881489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.881574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.881840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.881906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 
00:27:21.205 [2024-12-10 22:58:28.882220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.882286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.882561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.882597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.882736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.882771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.882877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.882911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.883056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.883092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 
00:27:21.205 [2024-12-10 22:58:28.883236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.883270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.883440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.883506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.883691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.883725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.883911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.883975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.884293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.884360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 
00:27:21.205 [2024-12-10 22:58:28.884622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.884664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=180440 00:27:21.205 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:21.205 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 180440 00:27:21.205 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 180440 ']' 00:27:21.205 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.205 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.205 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:21.205 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.205 22:58:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.205 [2024-12-10 22:58:28.886174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.886209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.886374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.886404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.886503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.886531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.886669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.886702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 
00:27:21.205 [2024-12-10 22:58:28.886891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.886944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.887035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.887068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.887195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.887223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.887349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.887379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.887538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.887584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 
00:27:21.205 [2024-12-10 22:58:28.887735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.887763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.887893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.887921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.888042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.888071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.888203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.888231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.205 qpair failed and we were unable to recover it. 00:27:21.205 [2024-12-10 22:58:28.888329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.205 [2024-12-10 22:58:28.888358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 
00:27:21.206 [2024-12-10 22:58:28.888464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.888491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.888621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.888651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.888756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.888784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.888878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.888907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.888997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.889024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 
00:27:21.206 [2024-12-10 22:58:28.889127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.889156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.889279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.889307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.889398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.889433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.889531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.889570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.889674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.889707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 
00:27:21.206 [2024-12-10 22:58:28.889819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.889847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.889969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.889996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.890120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.890148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.890312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.890339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.890460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.890489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 
00:27:21.206 [2024-12-10 22:58:28.890606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.890635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.890747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.890776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.890901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.890929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.891055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.891084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.891183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.891211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 
00:27:21.206 [2024-12-10 22:58:28.891317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.891346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.891451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.891479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.891585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.891613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.891724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.891760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.891886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.891914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 
00:27:21.206 [2024-12-10 22:58:28.892013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.892041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.892162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.892192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.892300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.892328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.892423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.892451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.892573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.892602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 
00:27:21.206 [2024-12-10 22:58:28.892700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.892733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.892869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.892897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.893055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.893083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.893168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.893196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 00:27:21.206 [2024-12-10 22:58:28.893322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.206 [2024-12-10 22:58:28.893351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.206 qpair failed and we were unable to recover it. 
00:27:21.206 [2024-12-10 22:58:28.893500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.206 [2024-12-10 22:58:28.893528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.206 qpair failed and we were unable to recover it.
00:27:21.206 [2024-12-10 22:58:28.893635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.206 [2024-12-10 22:58:28.893662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.206 qpair failed and we were unable to recover it.
00:27:21.206 [2024-12-10 22:58:28.893786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.206 [2024-12-10 22:58:28.893822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.206 qpair failed and we were unable to recover it.
00:27:21.206 [2024-12-10 22:58:28.893978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.206 [2024-12-10 22:58:28.894007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.206 qpair failed and we were unable to recover it.
00:27:21.206 [2024-12-10 22:58:28.894106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.206 [2024-12-10 22:58:28.894135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.206 qpair failed and we were unable to recover it.
00:27:21.206 [2024-12-10 22:58:28.894299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.206 [2024-12-10 22:58:28.894329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.206 qpair failed and we were unable to recover it.
00:27:21.206 [2024-12-10 22:58:28.894450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.206 [2024-12-10 22:58:28.894478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.206 qpair failed and we were unable to recover it.
00:27:21.206 [2024-12-10 22:58:28.894634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.206 [2024-12-10 22:58:28.894662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.206 qpair failed and we were unable to recover it.
00:27:21.206 [2024-12-10 22:58:28.894750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.206 [2024-12-10 22:58:28.894782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.206 qpair failed and we were unable to recover it.
00:27:21.206 [2024-12-10 22:58:28.894874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.206 [2024-12-10 22:58:28.894901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.206 qpair failed and we were unable to recover it.
00:27:21.206 [2024-12-10 22:58:28.895021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.895047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.895187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.895214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.895322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.895354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.895469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.895494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.895588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.895616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.895729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.895774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.895886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.895914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.896031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.896058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.896151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.896178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.896335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.896365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.896454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.896481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.896605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.896633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.896752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.896781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.896896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.896925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.897013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.897041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.897125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.897154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.897245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.897278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.897382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.897411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.897608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.897635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.897728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.897754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.897874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.897900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.898025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.898050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.898160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.898188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.898312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.898346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.898468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.898496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.898610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.898636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.898747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.898787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.898900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.898935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.899081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.899130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.899279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.899321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.899417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.899448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.899577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.899627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.899742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.899768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.899915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.899952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.900095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.900128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.900277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.900313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.900430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.900457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.900558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.900584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.900705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.900736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.900826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.900852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.900954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.900979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.207 [2024-12-10 22:58:28.901173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.207 [2024-12-10 22:58:28.901206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.207 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.901411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.901462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.901565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.901609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.901695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.901722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.901840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.901866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.901957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.902012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.902123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.902170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.902322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.902357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.902476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.902502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.902602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.902659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.902767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.902799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.902952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.902978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.903092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.903118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.903262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.903292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.903407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.903433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.903535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.903570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.903698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.903727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.903821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.903848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.903942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.903968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.904059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.904084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.904240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.904267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.904344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.904373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.904497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.904522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.904619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.904646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.904734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.497 [2024-12-10 22:58:28.904760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.497 qpair failed and we were unable to recover it.
00:27:21.497 [2024-12-10 22:58:28.904849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.904875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.905032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.905059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.905171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.905197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.905290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.905316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.905393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.905419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.905508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.905533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.905635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.905665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.905770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.905798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.905891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.905917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.906031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.906055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.906245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.906270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.906390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.906415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.906555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.906580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.906681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.906706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.906808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.906833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.906940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.906965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.907046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.907070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.907162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.907188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.907280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.498 [2024-12-10 22:58:28.907305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.498 qpair failed and we were unable to recover it.
00:27:21.498 [2024-12-10 22:58:28.907391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.907415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 00:27:21.498 [2024-12-10 22:58:28.907518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.907543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 00:27:21.498 [2024-12-10 22:58:28.907654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.907680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 00:27:21.498 [2024-12-10 22:58:28.907777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.907802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 00:27:21.498 [2024-12-10 22:58:28.907944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.907968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 
00:27:21.498 [2024-12-10 22:58:28.908081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.908107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 00:27:21.498 [2024-12-10 22:58:28.908191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.908216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 00:27:21.498 [2024-12-10 22:58:28.908319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.908344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 00:27:21.498 [2024-12-10 22:58:28.908457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.908482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 00:27:21.498 [2024-12-10 22:58:28.908572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.908598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 
00:27:21.498 [2024-12-10 22:58:28.908710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.908738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 00:27:21.498 [2024-12-10 22:58:28.908860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.498 [2024-12-10 22:58:28.908886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.498 qpair failed and we were unable to recover it. 00:27:21.498 [2024-12-10 22:58:28.908974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.499 [2024-12-10 22:58:28.908999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.499 qpair failed and we were unable to recover it. 00:27:21.499 [2024-12-10 22:58:28.909119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.499 [2024-12-10 22:58:28.909145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.499 qpair failed and we were unable to recover it. 00:27:21.499 [2024-12-10 22:58:28.909264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.499 [2024-12-10 22:58:28.909290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.499 qpair failed and we were unable to recover it. 
00:27:21.499 [2024-12-10 22:58:28.909383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.499 [2024-12-10 22:58:28.909409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.499 qpair failed and we were unable to recover it. 00:27:21.499 [2024-12-10 22:58:28.909536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.499 [2024-12-10 22:58:28.909577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.499 qpair failed and we were unable to recover it. 00:27:21.499 [2024-12-10 22:58:28.909667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.499 [2024-12-10 22:58:28.909694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.499 qpair failed and we were unable to recover it. 00:27:21.499 [2024-12-10 22:58:28.909823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.499 [2024-12-10 22:58:28.909851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.499 qpair failed and we were unable to recover it. 00:27:21.499 [2024-12-10 22:58:28.909981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.499 [2024-12-10 22:58:28.910021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.499 qpair failed and we were unable to recover it. 
00:27:21.499 [2024-12-10 22:58:28.910140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.499 [2024-12-10 22:58:28.910174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.499 qpair failed and we were unable to recover it.
00:27:21.499 [2024-12-10 22:58:28.910276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.499 [2024-12-10 22:58:28.910309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.499 qpair failed and we were unable to recover it.
00:27:21.499 [2024-12-10 22:58:28.910473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.499 [2024-12-10 22:58:28.910507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.499 qpair failed and we were unable to recover it.
00:27:21.499 [2024-12-10 22:58:28.910619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.499 [2024-12-10 22:58:28.910645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.499 qpair failed and we were unable to recover it.
00:27:21.499 [2024-12-10 22:58:28.910728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.499 [2024-12-10 22:58:28.910759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.499 qpair failed and we were unable to recover it.
00:27:21.501 [2024-12-10 22:58:28.921168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.501 [2024-12-10 22:58:28.921193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.501 qpair failed and we were unable to recover it. 00:27:21.501 [2024-12-10 22:58:28.921287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.501 [2024-12-10 22:58:28.921312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.501 qpair failed and we were unable to recover it. 00:27:21.501 [2024-12-10 22:58:28.921399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.501 [2024-12-10 22:58:28.921423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.501 qpair failed and we were unable to recover it. 00:27:21.501 [2024-12-10 22:58:28.921511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.501 [2024-12-10 22:58:28.921543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.501 qpair failed and we were unable to recover it. 00:27:21.501 [2024-12-10 22:58:28.921664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.501 [2024-12-10 22:58:28.921689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.501 qpair failed and we were unable to recover it. 
00:27:21.501 [2024-12-10 22:58:28.921769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.501 [2024-12-10 22:58:28.921793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.501 qpair failed and we were unable to recover it. 00:27:21.501 [2024-12-10 22:58:28.921886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.501 [2024-12-10 22:58:28.921911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.501 qpair failed and we were unable to recover it. 00:27:21.501 [2024-12-10 22:58:28.922053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.501 [2024-12-10 22:58:28.922077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.501 qpair failed and we were unable to recover it. 00:27:21.501 [2024-12-10 22:58:28.922196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.501 [2024-12-10 22:58:28.922221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.501 qpair failed and we were unable to recover it. 00:27:21.501 [2024-12-10 22:58:28.922311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.922335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 
00:27:21.502 [2024-12-10 22:58:28.922444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.922469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.922580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.922605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.922696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.922721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.922832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.922856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.922972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.922997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 
00:27:21.502 [2024-12-10 22:58:28.923089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.923115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.923204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.923229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.923368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.923393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.923536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.923572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.923686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.923711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 
00:27:21.502 [2024-12-10 22:58:28.923829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.923854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.923935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.923959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.924037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.924062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.924155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.924180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.924290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.924315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 
00:27:21.502 [2024-12-10 22:58:28.924392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.924417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.924505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.924530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.924631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.924656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.924775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.924800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.924899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.924924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 
00:27:21.502 [2024-12-10 22:58:28.925039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.925063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.925139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.925164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.925279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.925304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.925387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.925411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.925494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.925519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 
00:27:21.502 [2024-12-10 22:58:28.925646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.925673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.925767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.925792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.502 [2024-12-10 22:58:28.925912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.502 [2024-12-10 22:58:28.925938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.502 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.926077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.926101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.926194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.926220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 
00:27:21.503 [2024-12-10 22:58:28.926331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.926355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.926442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.926471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.926560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.926585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.926701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.926726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.926837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.926862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 
00:27:21.503 [2024-12-10 22:58:28.926979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.927004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.927093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.927118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.927240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.927266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.927378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.927404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.927493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.927518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 
00:27:21.503 [2024-12-10 22:58:28.927612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.927637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.927721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.927747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.927851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.927876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.927962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.927987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.928081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.928107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 
00:27:21.503 [2024-12-10 22:58:28.928248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.928273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.928379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.928404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.928496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.928521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.928610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.928636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.928748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.928774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 
00:27:21.503 [2024-12-10 22:58:28.928889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.928914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.928998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.929023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.929141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.929166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.929251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.929276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.929365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.929390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 
00:27:21.503 [2024-12-10 22:58:28.929461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.929486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.929599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.929624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.929704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.929730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.929815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.929853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.929950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.929975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 
00:27:21.503 [2024-12-10 22:58:28.930059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.930083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.930194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.503 [2024-12-10 22:58:28.930218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.503 qpair failed and we were unable to recover it. 00:27:21.503 [2024-12-10 22:58:28.930336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.504 [2024-12-10 22:58:28.930361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.504 qpair failed and we were unable to recover it. 00:27:21.504 [2024-12-10 22:58:28.930444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.504 [2024-12-10 22:58:28.930469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.504 qpair failed and we were unable to recover it. 00:27:21.504 [2024-12-10 22:58:28.930557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.504 [2024-12-10 22:58:28.930582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.504 qpair failed and we were unable to recover it. 
00:27:21.504 [2024-12-10 22:58:28.930703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.504 [2024-12-10 22:58:28.930727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.504 qpair failed and we were unable to recover it. 00:27:21.504 [2024-12-10 22:58:28.930806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.504 [2024-12-10 22:58:28.930830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.504 qpair failed and we were unable to recover it. 00:27:21.504 [2024-12-10 22:58:28.930916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.504 [2024-12-10 22:58:28.930940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.504 qpair failed and we were unable to recover it. 00:27:21.504 [2024-12-10 22:58:28.931022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.504 [2024-12-10 22:58:28.931047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.504 qpair failed and we were unable to recover it. 00:27:21.504 [2024-12-10 22:58:28.931169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.504 [2024-12-10 22:58:28.931194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.504 qpair failed and we were unable to recover it. 
00:27:21.505 [2024-12-10 22:58:28.936678] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
00:27:21.505 [2024-12-10 22:58:28.936761] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:21.507 [2024-12-10 22:58:28.945221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.945253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.945348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.945374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.945454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.945479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.945597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.945623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.945724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.945750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 
00:27:21.507 [2024-12-10 22:58:28.945835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.945860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.945975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.946000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.946079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.946104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.946184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.946208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.946346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.946371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 
00:27:21.507 [2024-12-10 22:58:28.946459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.946483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.946588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.946613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.946702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.946728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.946838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.946863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.946954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.946978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 
00:27:21.507 [2024-12-10 22:58:28.947060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.947084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.507 qpair failed and we were unable to recover it. 00:27:21.507 [2024-12-10 22:58:28.947168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.507 [2024-12-10 22:58:28.947193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.947309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.947334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.947459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.947483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.947577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.947602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 
00:27:21.508 [2024-12-10 22:58:28.947714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.947740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.947825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.947850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.947961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.947986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.948111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.948136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.948218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.948244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 
00:27:21.508 [2024-12-10 22:58:28.948325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.948351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.948456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.948481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.948576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.948606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.948700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.948725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.948804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.948828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 
00:27:21.508 [2024-12-10 22:58:28.948944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.948969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.949058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.949082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.949169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.949194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.949264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.949289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.949409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.949434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 
00:27:21.508 [2024-12-10 22:58:28.949525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.949579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.949671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.949697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.949784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.949809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.949895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.949921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.950033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.950058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 
00:27:21.508 [2024-12-10 22:58:28.950172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.950198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.950295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.950320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.950428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.950453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.950535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.950567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.950656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.950682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 
00:27:21.508 [2024-12-10 22:58:28.950789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.950814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.950928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.950953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.951069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.951094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.951235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.951261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.951348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.951374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 
00:27:21.508 [2024-12-10 22:58:28.951477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.951516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.508 qpair failed and we were unable to recover it. 00:27:21.508 [2024-12-10 22:58:28.951638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.508 [2024-12-10 22:58:28.951677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.951772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.951808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.951928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.951954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.952041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.952072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 
00:27:21.509 [2024-12-10 22:58:28.952156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.952181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.952293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.952320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.952407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.952434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.952554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.952581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.952688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.952737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 
00:27:21.509 [2024-12-10 22:58:28.952868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.952916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.953012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.953060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.953164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.953210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.953325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.953350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.953472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.953498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 
00:27:21.509 [2024-12-10 22:58:28.953584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.953610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.953702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.953727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.953806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.953831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.953949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.953974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.954052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.954077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 
00:27:21.509 [2024-12-10 22:58:28.954154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.954179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.954300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.954324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.954408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.954434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.954550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.954576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 00:27:21.509 [2024-12-10 22:58:28.954690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.509 [2024-12-10 22:58:28.954715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.509 qpair failed and we were unable to recover it. 
00:27:21.509 [2024-12-10 22:58:28.954827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.509 [2024-12-10 22:58:28.954852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.509 qpair failed and we were unable to recover it.
00:27:21.509 [2024-12-10 22:58:28.955543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb4f30 is same with the state(6) to be set
[identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequences repeat through 22:58:28.970 for tqpair=0x1ea6fa0, 0x7f08d8000b90, and 0x7f08e4000b90, all with addr=10.0.0.2, port=4420]
00:27:21.513 [2024-12-10 22:58:28.970484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.970510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.970616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.970650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.970776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.970822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.970952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.970997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.971135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.971168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 
00:27:21.513 [2024-12-10 22:58:28.971295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.971320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.971429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.971455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.971576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.971603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.971679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.971704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.971794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.971819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 
00:27:21.513 [2024-12-10 22:58:28.971910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.971935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.972013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.972037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.972181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.972210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.972326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.972352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.972459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.972484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 
00:27:21.513 [2024-12-10 22:58:28.972582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.972607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.972688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.972714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.972821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.972846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.972957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.972982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.973075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.973100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 
00:27:21.513 [2024-12-10 22:58:28.973213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.973242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.973355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.973381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.973466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.973493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.973618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.973644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.973734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.973760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 
00:27:21.513 [2024-12-10 22:58:28.973881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.973907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.973992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.974017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.974104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.974129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.974221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.974246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.974334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.974359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 
00:27:21.513 [2024-12-10 22:58:28.974467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.974493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.974614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.974640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.974728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.974753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.974836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.513 [2024-12-10 22:58:28.974861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.513 qpair failed and we were unable to recover it. 00:27:21.513 [2024-12-10 22:58:28.974952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.974978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-12-10 22:58:28.975066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.975094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.975175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.975200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.975316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.975347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.975440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.975466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.975555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.975585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-12-10 22:58:28.975670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.975697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.975772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.975797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.975883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.975908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.975987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.976012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.976118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.976143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-12-10 22:58:28.976285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.976311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.976400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.976424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.976515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.976541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.976657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.976682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.976767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.976792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-12-10 22:58:28.976887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.976911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.976989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.977014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.977102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.977127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.977228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.977257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.977350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.977375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-12-10 22:58:28.977520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.977558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.977701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.977727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.977839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.977865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.977978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.978003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.978103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.978130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-12-10 22:58:28.978214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.978238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.978333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.978358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.978439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.978464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.978573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.978599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.978689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.978714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-12-10 22:58:28.978828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.978853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.978973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.978997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.979119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.979145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.979236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.979264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 00:27:21.514 [2024-12-10 22:58:28.979347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.514 [2024-12-10 22:58:28.979373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.515 [2024-12-10 22:58:28.979473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.979500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.979588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.979622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.979743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.979771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.979889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.979915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.980036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.980062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 
00:27:21.515 [2024-12-10 22:58:28.980154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.980178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.980269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.980294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.980404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.980429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.980556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.980583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.980681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.980705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 
00:27:21.515 [2024-12-10 22:58:28.980820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.980845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.980932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.980956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.981073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.981098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.981177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.981202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 00:27:21.515 [2024-12-10 22:58:28.981288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.515 [2024-12-10 22:58:28.981313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.515 qpair failed and we were unable to recover it. 
00:27:21.515 [2024-12-10 22:58:28.981405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.981430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.981553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.981582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.981695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.981722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.981849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.981874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.981965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.981991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.982077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.982101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.982215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.982240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.982355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.982379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.982494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.982524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.982651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.982676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.982765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.982791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.982910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.982935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.983049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.983075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.983184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.983209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.983320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.983344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.983435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.983461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.983597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.983623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.983742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.983768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.983882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.983906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-12-10 22:58:28.984018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.515 [2024-12-10 22:58:28.984043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.984119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.984144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.984254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.984280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.984371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.984399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.984489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.984515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.984644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.984677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.984768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.984794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.984909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.984935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.985022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.985048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.985132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.985160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.985301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.985326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.985438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.985464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.985578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.985606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.985719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.985743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.985832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.985857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.985935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.985959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.986106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.986136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.986253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.986278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.986388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.986412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.986493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.986517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.986619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.986644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.986731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.986755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.986851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.986876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.986983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.987007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.987120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.987145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.987228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.987253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.987358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.987384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.987473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.987498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.987583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.987609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.987699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.987725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.987815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.987844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.987959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.987985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.988181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.988208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.988328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.988358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.988451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.516 [2024-12-10 22:58:28.988478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.516 qpair failed and we were unable to recover it.
00:27:21.516 [2024-12-10 22:58:28.988604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.988631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.988741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.988767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.988862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.988888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.989002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.989029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.989143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.989169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.989253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.989279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.989372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.989398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.989511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.989538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.989636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.989665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.989749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.989773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.989862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.989886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.989961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.989986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.990063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.990088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.990175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.990210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.990299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.990324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.990403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.990430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.990553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.990581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.990669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.990696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.990812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.990838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.990942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.990968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.991084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.991111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.991207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.991233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.991330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.991356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.991464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.991488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.991590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.991616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.991702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.991726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.991860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.991886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.991977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.992003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.992089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.992114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.992198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.992223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.517 [2024-12-10 22:58:28.992337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.517 [2024-12-10 22:58:28.992363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.517 qpair failed and we were unable to recover it.
00:27:21.518 [2024-12-10 22:58:28.992480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.518 [2024-12-10 22:58:28.992505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.518 qpair failed and we were unable to recover it.
00:27:21.518 [2024-12-10 22:58:28.992592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.518 [2024-12-10 22:58:28.992618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.518 qpair failed and we were unable to recover it.
00:27:21.518 [2024-12-10 22:58:28.992729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.518 [2024-12-10 22:58:28.992753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.518 qpair failed and we were unable to recover it.
00:27:21.518 [2024-12-10 22:58:28.992894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.518 [2024-12-10 22:58:28.992919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.518 qpair failed and we were unable to recover it.
00:27:21.518 [2024-12-10 22:58:28.993006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.518 [2024-12-10 22:58:28.993035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.518 qpair failed and we were unable to recover it.
00:27:21.518 [2024-12-10 22:58:28.993153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.518 [2024-12-10 22:58:28.993178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.518 qpair failed and we were unable to recover it.
00:27:21.518 [2024-12-10 22:58:28.993290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.518 [2024-12-10 22:58:28.993315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.518 qpair failed and we were unable to recover it.
00:27:21.518 [2024-12-10 22:58:28.993403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.518 [2024-12-10 22:58:28.993428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.518 qpair failed and we were unable to recover it.
00:27:21.518 [2024-12-10 22:58:28.993520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.518 [2024-12-10 22:58:28.993550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.518 qpair failed and we were unable to recover it.
00:27:21.518 [2024-12-10 22:58:28.993631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.518 [2024-12-10 22:58:28.993655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.518 qpair failed and we were unable to recover it.
00:27:21.518 [2024-12-10 22:58:28.993735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.518 [2024-12-10 22:58:28.993759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.518 qpair failed and we were unable to recover it. 00:27:21.518 [2024-12-10 22:58:28.993855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.518 [2024-12-10 22:58:28.993879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.518 qpair failed and we were unable to recover it. 00:27:21.518 [2024-12-10 22:58:28.993960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.518 [2024-12-10 22:58:28.993985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.518 qpair failed and we were unable to recover it. 00:27:21.518 [2024-12-10 22:58:28.994070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.518 [2024-12-10 22:58:28.994095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.518 qpair failed and we were unable to recover it. 00:27:21.518 [2024-12-10 22:58:28.994204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.518 [2024-12-10 22:58:28.994228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.518 qpair failed and we were unable to recover it. 
00:27:21.518 [2024-12-10 22:58:28.994342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.518 [2024-12-10 22:58:28.994366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.518 qpair failed and we were unable to recover it. 00:27:21.518 [2024-12-10 22:58:28.994450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.518 [2024-12-10 22:58:28.994474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.518 qpair failed and we were unable to recover it. 00:27:21.518 [2024-12-10 22:58:28.994570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.518 [2024-12-10 22:58:28.994595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.518 qpair failed and we were unable to recover it. 00:27:21.518 [2024-12-10 22:58:28.994684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.518 [2024-12-10 22:58:28.994710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.518 qpair failed and we were unable to recover it. 00:27:21.518 [2024-12-10 22:58:28.994822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.518 [2024-12-10 22:58:28.994848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.518 qpair failed and we were unable to recover it. 
00:27:21.518 [2024-12-10 22:58:28.995476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.518 [2024-12-10 22:58:28.995506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.518 qpair failed and we were unable to recover it.
00:27:21.521 [2024-12-10 22:58:29.007041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.007066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.007159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.007184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.007293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.007318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.007426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.007451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.007554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.007579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 
00:27:21.521 [2024-12-10 22:58:29.007717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.007741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.007823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.007847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.007962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.007986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.008067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.008090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.008199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.008223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 
00:27:21.521 [2024-12-10 22:58:29.008306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.008330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.008411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.008434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.008555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.008579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.008662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.008685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.008761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.008786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 
00:27:21.521 [2024-12-10 22:58:29.008870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.008895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.008971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.008995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.009070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.009093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.009209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.009232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.009344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.009367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 
00:27:21.521 [2024-12-10 22:58:29.009444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.009468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.009551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.009575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.009655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.009678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.009787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.009811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.009923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.009951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 
00:27:21.521 [2024-12-10 22:58:29.010065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.010089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.010174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.010197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.010311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.010336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.521 qpair failed and we were unable to recover it. 00:27:21.521 [2024-12-10 22:58:29.010432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.521 [2024-12-10 22:58:29.010455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.010542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.010573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 
00:27:21.522 [2024-12-10 22:58:29.010656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.010681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.010772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.010798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.010878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.010902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.011034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.011060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.011150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.011175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 
00:27:21.522 [2024-12-10 22:58:29.011261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.011286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.011367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.011393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.011505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.011530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.011635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.011675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.011777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.011803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 
00:27:21.522 [2024-12-10 22:58:29.011942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.011969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.012057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.012084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.012170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.012195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.012298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.012336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.012462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.012489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 
00:27:21.522 [2024-12-10 22:58:29.012585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.012611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.012766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.012791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.012880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.012905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.013021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.013047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.013143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.013168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 
00:27:21.522 [2024-12-10 22:58:29.013261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.013285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.013371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.013397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.013520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.013550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.013636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.013661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.013744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.013770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 
00:27:21.522 [2024-12-10 22:58:29.013879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.013904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.013994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.014019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.014103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.014128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.014239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.014264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.014357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.014382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 
00:27:21.522 [2024-12-10 22:58:29.014484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.522 [2024-12-10 22:58:29.014492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.522 [2024-12-10 22:58:29.014521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.522 qpair failed and we were unable to recover it. 00:27:21.522 [2024-12-10 22:58:29.014616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.014644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 00:27:21.523 [2024-12-10 22:58:29.014741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.014768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 00:27:21.523 [2024-12-10 22:58:29.014859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.014886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 00:27:21.523 [2024-12-10 22:58:29.015006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.015033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 
00:27:21.523 [2024-12-10 22:58:29.015126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.015153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 00:27:21.523 [2024-12-10 22:58:29.015296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.015324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 00:27:21.523 [2024-12-10 22:58:29.015441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.015470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 00:27:21.523 [2024-12-10 22:58:29.015569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.015597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 00:27:21.523 [2024-12-10 22:58:29.015740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.015767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 
00:27:21.523 [2024-12-10 22:58:29.015846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.015873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 00:27:21.523 [2024-12-10 22:58:29.015961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.015988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 00:27:21.523 [2024-12-10 22:58:29.016098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.016124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 00:27:21.523 [2024-12-10 22:58:29.016208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.016236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 00:27:21.523 [2024-12-10 22:58:29.016326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.523 [2024-12-10 22:58:29.016353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.523 qpair failed and we were unable to recover it. 
00:27:21.523 [2024-12-10 22:58:29.016472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.523 [2024-12-10 22:58:29.016500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.523 qpair failed and we were unable to recover it.
00:27:21.523 [2024-12-10 22:58:29.016604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.523 [2024-12-10 22:58:29.016632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.523 qpair failed and we were unable to recover it.
00:27:21.523 [2024-12-10 22:58:29.016859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.523 [2024-12-10 22:58:29.016887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.523 qpair failed and we were unable to recover it.
00:27:21.524 [2024-12-10 22:58:29.023233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.524 [2024-12-10 22:58:29.023268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.524 qpair failed and we were unable to recover it.
[the same connect()/qpair-failure triple repeats continuously from 22:58:29.016472 through 22:58:29.031456, cycling over tqpair values 0x1ea6fa0, 0x7f08e4000b90, 0x7f08dc000b90, and 0x7f08d8000b90, all targeting addr=10.0.0.2, port=4420]
00:27:21.526 [2024-12-10 22:58:29.031569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.031609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.031737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.031764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.031857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.031881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.031956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.031979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.032063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.032087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 
00:27:21.526 [2024-12-10 22:58:29.032175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.032203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.032290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.032314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.032460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.032486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.032601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.032629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.032754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.032780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 
00:27:21.526 [2024-12-10 22:58:29.032861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.032886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.032992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.033018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.033135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.033161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.033280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.033306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.033385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.033411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 
00:27:21.526 [2024-12-10 22:58:29.033491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.033517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.033605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.033631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.033743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.033768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.526 [2024-12-10 22:58:29.033883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.526 [2024-12-10 22:58:29.033909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.526 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.034019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.034045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 
00:27:21.527 [2024-12-10 22:58:29.034127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.034151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.034264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.034295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.034395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.034431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.034561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.034589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.034678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.034703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 
00:27:21.527 [2024-12-10 22:58:29.034814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.034841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.034924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.034950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.035060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.035086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.035184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.035211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.035306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.035338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 
00:27:21.527 [2024-12-10 22:58:29.035433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.035458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.035600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.035627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.035739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.035764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.035854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.035879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.035968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.035995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 
00:27:21.527 [2024-12-10 22:58:29.036111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.036136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.036254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.036280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.036392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.036418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.036503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.036528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.036628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.036652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 
00:27:21.527 [2024-12-10 22:58:29.036761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.036785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.036869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.036899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.036986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.037010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.037125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.037152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.037229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.037254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 
00:27:21.527 [2024-12-10 22:58:29.037346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.037374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.037493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.037520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.037647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.037674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.037762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.037788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.037897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.037928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 
00:27:21.527 [2024-12-10 22:58:29.038020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.038045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.038139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.038166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.038289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.038329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.038457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.038486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.038580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.038608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 
00:27:21.527 [2024-12-10 22:58:29.038734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.038761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.038846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.038873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.527 qpair failed and we were unable to recover it. 00:27:21.527 [2024-12-10 22:58:29.038989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.527 [2024-12-10 22:58:29.039017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.039105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.039131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.039243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.039269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 
00:27:21.528 [2024-12-10 22:58:29.039348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.039376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.039485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.039512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.039598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.039623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.039738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.039767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.039846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.039871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 
00:27:21.528 [2024-12-10 22:58:29.039953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.039978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.040091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.040118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.040232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.040260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.040379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.040409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.040489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.040515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 
00:27:21.528 [2024-12-10 22:58:29.040639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.040667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.040753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.040780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.040868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.040895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.041005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.041032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 00:27:21.528 [2024-12-10 22:58:29.041132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.528 [2024-12-10 22:58:29.041171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.528 qpair failed and we were unable to recover it. 
00:27:21.528 [2024-12-10 22:58:29.041290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.528 [2024-12-10 22:58:29.041319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.528 qpair failed and we were unable to recover it.
00:27:21.531 [... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated ~114 more times between 22:58:29.041 and 22:58:29.055, alternating across tqpair=0x1ea6fa0, 0x7f08d8000b90, and 0x7f08dc000b90, all against addr=10.0.0.2, port=4420 ...]
00:27:21.531 [2024-12-10 22:58:29.055931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.531 [2024-12-10 22:58:29.055959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.531 qpair failed and we were unable to recover it. 00:27:21.531 [2024-12-10 22:58:29.056075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.531 [2024-12-10 22:58:29.056103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.531 qpair failed and we were unable to recover it. 00:27:21.531 [2024-12-10 22:58:29.056195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.531 [2024-12-10 22:58:29.056221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.531 qpair failed and we were unable to recover it. 00:27:21.531 [2024-12-10 22:58:29.056306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.531 [2024-12-10 22:58:29.056334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.531 qpair failed and we were unable to recover it. 00:27:21.531 [2024-12-10 22:58:29.056446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.056472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 
00:27:21.532 [2024-12-10 22:58:29.056568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.056600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.056755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.056781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.056870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.056897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.056986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.057012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.057131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.057159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 
00:27:21.532 [2024-12-10 22:58:29.057242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.057266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.057350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.057375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.057492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.057515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.057601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.057626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.057708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.057732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 
00:27:21.532 [2024-12-10 22:58:29.057838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.057864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.057954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.057977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.058056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.058079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.058167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.058194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.058284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.058313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 
00:27:21.532 [2024-12-10 22:58:29.058399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.058430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.058521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.058554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.058672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.058699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.058786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.058814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.058959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.058986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 
00:27:21.532 [2024-12-10 22:58:29.059071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.059100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.059177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.059201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.059321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.059346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.059433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.059457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.059569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.059594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 
00:27:21.532 [2024-12-10 22:58:29.059680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.059708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.059798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.059825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.059951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.059990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.060107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.060132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.060255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.060281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 
00:27:21.532 [2024-12-10 22:58:29.060418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.060443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.060520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.060557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.060647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.060672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.532 [2024-12-10 22:58:29.060762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.532 [2024-12-10 22:58:29.060788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.532 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.060879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.060903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 
00:27:21.533 [2024-12-10 22:58:29.060988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.061013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.061095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.061121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.061208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.061232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.061362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.061402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.061527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.061568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 
00:27:21.533 [2024-12-10 22:58:29.061662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.061687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.061803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.061829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.061919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.061953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.062039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.062063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.062176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.062201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 
00:27:21.533 [2024-12-10 22:58:29.062343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.062373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.062454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.062479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.062618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.062646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.062759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.062787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.062902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.062929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 
00:27:21.533 [2024-12-10 22:58:29.063060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.063093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.063187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.063214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.063308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.063345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.063457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.063485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.063583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.063611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 
00:27:21.533 [2024-12-10 22:58:29.063697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.063724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.063845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.063871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.063956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.063982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.064067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.064093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.064207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.064232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 
00:27:21.533 [2024-12-10 22:58:29.064313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.064338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.064420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.064444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.064587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.064617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.064705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.064732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.064846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.064872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 
00:27:21.533 [2024-12-10 22:58:29.064955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.064980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.065064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.065089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.065167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.065192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.533 qpair failed and we were unable to recover it. 00:27:21.533 [2024-12-10 22:58:29.065310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.533 [2024-12-10 22:58:29.065337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 00:27:21.534 [2024-12-10 22:58:29.065454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.534 [2024-12-10 22:58:29.065491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 
00:27:21.534 [2024-12-10 22:58:29.065623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.534 [2024-12-10 22:58:29.065652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 00:27:21.534 [2024-12-10 22:58:29.065748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.534 [2024-12-10 22:58:29.065774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 00:27:21.534 [2024-12-10 22:58:29.065891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.534 [2024-12-10 22:58:29.065919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 00:27:21.534 [2024-12-10 22:58:29.066025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.534 [2024-12-10 22:58:29.066052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 00:27:21.534 [2024-12-10 22:58:29.066165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.534 [2024-12-10 22:58:29.066191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 
00:27:21.534 [2024-12-10 22:58:29.066280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.534 [2024-12-10 22:58:29.066307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 00:27:21.534 [2024-12-10 22:58:29.066399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.534 [2024-12-10 22:58:29.066431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 00:27:21.534 [2024-12-10 22:58:29.066563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.534 [2024-12-10 22:58:29.066592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 00:27:21.534 [2024-12-10 22:58:29.066737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.534 [2024-12-10 22:58:29.066764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 00:27:21.534 [2024-12-10 22:58:29.066846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.534 [2024-12-10 22:58:29.066871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.534 qpair failed and we were unable to recover it. 
00:27:21.534 [2024-12-10 22:58:29.066989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.067016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.067101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.067127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.067237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.067263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.067364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.067391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.067475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.067500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.067596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.067622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.067714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.067739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.067854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.067880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.067956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.067979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.068063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.068092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.068176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.068202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.068307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.068333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.068448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.068479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.068597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.068625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.068748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.068773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.068857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.068891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.069106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.069133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.069221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.069248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.069369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.069396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.069508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.069534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.069641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.069666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.069753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.069777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.069885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.069909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.534 qpair failed and we were unable to recover it.
00:27:21.534 [2024-12-10 22:58:29.070019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.534 [2024-12-10 22:58:29.070044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.070136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.070163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.070277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.070305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.070389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.070420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.070512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.070539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.070667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.070694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.070786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.070818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.070933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.070961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.071044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.071069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.071157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.071184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.071271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.071299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.071388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.071414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.071500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.071525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.071647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.071674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.071768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.071795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.071878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.071903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.072032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.072060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.072202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.072229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.072334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.072361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.072448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.072472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.072579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.072618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.072739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.072766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.072878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.072907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.072989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.073014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.073107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.073134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.073224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.073251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.073367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.073395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.073501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.073540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.073645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.073671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.073757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.073782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.073867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.073893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.073967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.073991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.074106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.535 [2024-12-10 22:58:29.074133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.535 qpair failed and we were unable to recover it.
00:27:21.535 [2024-12-10 22:58:29.074217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.074247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.074333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.074363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.074448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.074474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.074558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.074589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.074681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.074706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.074791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.074816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.074907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.074933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.075016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.075042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.075128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.075154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.075230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.075254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.075330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.075356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.075428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.075451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.075543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.075572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.075649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.075672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.075758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.075783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.075894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.075905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:21.536 [2024-12-10 22:58:29.075920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.075938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:21.536 [2024-12-10 22:58:29.075953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:21.536 [2024-12-10 22:58:29.075965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:21.536 [2024-12-10 22:58:29.075975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:21.536 [2024-12-10 22:58:29.076003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.076027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.076110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.076137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.076228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.076254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.076367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.076395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.076488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.076515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.076627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.076657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.076773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.076799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.076894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.076919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.077006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.077032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.077557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:27:21.536 [2024-12-10 22:58:29.077609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:27:21.536 [2024-12-10 22:58:29.077685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.536 [2024-12-10 22:58:29.077717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.536 [2024-12-10 22:58:29.077657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:27:21.536 qpair failed and we were unable to recover it.
00:27:21.536 [2024-12-10 22:58:29.077661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:27:21.537 [2024-12-10 22:58:29.077820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.537 [2024-12-10 22:58:29.077847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.537 qpair failed and we were unable to recover it.
00:27:21.537 [2024-12-10 22:58:29.077930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.537 [2024-12-10 22:58:29.077955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.537 qpair failed and we were unable to recover it.
00:27:21.537 [2024-12-10 22:58:29.078056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.537 [2024-12-10 22:58:29.078083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.537 qpair failed and we were unable to recover it.
00:27:21.537 [2024-12-10 22:58:29.078176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.537 [2024-12-10 22:58:29.078204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.537 qpair failed and we were unable to recover it.
00:27:21.537 [2024-12-10 22:58:29.078297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.537 [2024-12-10 22:58:29.078326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.537 qpair failed and we were unable to recover it.
00:27:21.537 [2024-12-10 22:58:29.078468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.537 [2024-12-10 22:58:29.078496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.537 qpair failed and we were unable to recover it.
00:27:21.537 [2024-12-10 22:58:29.078594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.537 [2024-12-10 22:58:29.078621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.537 qpair failed and we were unable to recover it.
00:27:21.537 [2024-12-10 22:58:29.078710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.537 [2024-12-10 22:58:29.078737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.537 qpair failed and we were unable to recover it.
00:27:21.537 [2024-12-10 22:58:29.078822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.537 [2024-12-10 22:58:29.078850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.537 qpair failed and we were unable to recover it.
00:27:21.537 [2024-12-10 22:58:29.078937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.537 [2024-12-10 22:58:29.078964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.537 qpair failed and we were unable to recover it.
00:27:21.537 [2024-12-10 22:58:29.079051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.079079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.079176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.079207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.079295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.079320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.079400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.079427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.079517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.079542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 
00:27:21.537 [2024-12-10 22:58:29.079643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.079669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.079753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.079778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.079889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.079914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.080002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.080026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.080112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.080136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 
00:27:21.537 [2024-12-10 22:58:29.080215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.080240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.080351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.080378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.080466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.080494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.080587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.080615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.080688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.080715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 
00:27:21.537 [2024-12-10 22:58:29.080820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.080846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.080924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.080951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.081041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.081068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.081186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.081212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.081327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.081354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 
00:27:21.537 [2024-12-10 22:58:29.081436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.081463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.081555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.081585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.081675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.537 [2024-12-10 22:58:29.081700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.537 qpair failed and we were unable to recover it. 00:27:21.537 [2024-12-10 22:58:29.081789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.081815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.081895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.081920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 
00:27:21.538 [2024-12-10 22:58:29.082009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.082040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.082124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.082151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.082273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.082308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.082401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.082426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.082512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.082537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 
00:27:21.538 [2024-12-10 22:58:29.082634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.082660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.082740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.082766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.082847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.082876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.082988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.083014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.083094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.083119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 
00:27:21.538 [2024-12-10 22:58:29.083208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.083236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.083327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.083354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.083439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.083464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.083542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.083573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.083661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.083686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 
00:27:21.538 [2024-12-10 22:58:29.083776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.083802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.083894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.083926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.084006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.084033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.084111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.084137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.084218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.084244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 
00:27:21.538 [2024-12-10 22:58:29.084327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.084357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.084454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.084487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.084591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.084618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.084699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.084723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.084807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.084832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 
00:27:21.538 [2024-12-10 22:58:29.084918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.084941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.085055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.085082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.085161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.085187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.085279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.085312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.085469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.085495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 
00:27:21.538 [2024-12-10 22:58:29.085600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.085627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.085703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.085728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.538 [2024-12-10 22:58:29.085818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.538 [2024-12-10 22:58:29.085842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.538 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.085928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.085954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.086041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.086065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 
00:27:21.539 [2024-12-10 22:58:29.086160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.086185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.086298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.086323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.086418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.086444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.086526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.086562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.086653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.086678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 
00:27:21.539 [2024-12-10 22:58:29.086766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.086791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.086875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.086901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.086986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.087011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.087104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.087137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.087239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.087268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 
00:27:21.539 [2024-12-10 22:58:29.087365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.087394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.087512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.087539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.087627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.087651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.087744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.087769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.087855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.087878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 
00:27:21.539 [2024-12-10 22:58:29.087965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.087988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.088080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.088104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.088181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.088204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.088286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.088310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.088388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.088415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 
00:27:21.539 [2024-12-10 22:58:29.088536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.088574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.088664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.088689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.088775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.088803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.088895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.088921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.089041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.089069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 
00:27:21.539 [2024-12-10 22:58:29.089158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.089185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.089270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.089297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.089377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.089404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.089492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.089519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 00:27:21.539 [2024-12-10 22:58:29.089619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.539 [2024-12-10 22:58:29.089649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.539 qpair failed and we were unable to recover it. 
00:27:21.543 [2024-12-10 22:58:29.103285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.103311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.103393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.103418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.103506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.103539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.103649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.103676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.103763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.103791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 
00:27:21.543 [2024-12-10 22:58:29.103874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.103900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.103980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.104006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.104083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.104108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.104200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.104227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.104335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.104360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 
00:27:21.543 [2024-12-10 22:58:29.104444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.104470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.104558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.104583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.104660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.104686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.104771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.104796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.104875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.104900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 
00:27:21.543 [2024-12-10 22:58:29.104977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.105002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.105088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.105112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.105193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.105218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.105354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.105395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.105488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.105517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 
00:27:21.543 [2024-12-10 22:58:29.105618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.105647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.105734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.105760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.105839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.105866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.105942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.105968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.106053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.106079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 
00:27:21.543 [2024-12-10 22:58:29.106163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.106188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.106295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.106321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.106415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.106439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.106558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.106584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.106668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.106697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 
00:27:21.543 [2024-12-10 22:58:29.106812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.106838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.106925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.106951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.107033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.107058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.107137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.107162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.107262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.107302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 
00:27:21.543 [2024-12-10 22:58:29.107399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.107427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.107511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.107538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.107667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.107694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.543 [2024-12-10 22:58:29.107780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.543 [2024-12-10 22:58:29.107806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.543 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.107896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.107922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 
00:27:21.544 [2024-12-10 22:58:29.108008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.108035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.108118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.108144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.108223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.108249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.108334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.108362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.108454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.108479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 
00:27:21.544 [2024-12-10 22:58:29.108578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.108605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.108699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.108724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.108811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.108835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.108922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.108948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.109033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.109062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 
00:27:21.544 [2024-12-10 22:58:29.109147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.109176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.109298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.109324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.109420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.109448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.109527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.109558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.109643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.109669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 
00:27:21.544 [2024-12-10 22:58:29.109765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.109793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.109884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.109913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.109994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.110019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.110100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.110124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.110216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.110245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 
00:27:21.544 [2024-12-10 22:58:29.110343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.110381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.110469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.110497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.110604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.110632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.110717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.110743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.110835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.110862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 
00:27:21.544 [2024-12-10 22:58:29.110947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.110973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.111085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.111113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.111203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.111230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.111324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.111350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.111437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.111464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 
00:27:21.544 [2024-12-10 22:58:29.111568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.111595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.111684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.111711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.111805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.111832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.111922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.111949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.112036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.112063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 
00:27:21.544 [2024-12-10 22:58:29.112151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.112175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.112259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.112284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.112361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.112386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.112470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.544 [2024-12-10 22:58:29.112494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.544 qpair failed and we were unable to recover it. 00:27:21.544 [2024-12-10 22:58:29.112591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.112617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 
00:27:21.545 [2024-12-10 22:58:29.112726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.112754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.112837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.112864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.113004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.113031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.113112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.113139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.113225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.113253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 
00:27:21.545 [2024-12-10 22:58:29.113346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.113373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.113471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.113498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.113594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.113620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.113716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.113742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.113833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.113860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 
00:27:21.545 [2024-12-10 22:58:29.113972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.113998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.114081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.114105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.114185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.114211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.114293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.114318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.114399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.114424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 
00:27:21.545 [2024-12-10 22:58:29.114537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.114567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.114653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.114678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.114771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.114797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.114873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.114898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.114970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.114995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 
00:27:21.545 [2024-12-10 22:58:29.115080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.115105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.115220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.115249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.115347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.115374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.115501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.115554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.115654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.115683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 
00:27:21.545 [2024-12-10 22:58:29.115766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.115792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.115884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.115910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.116094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.116120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.116227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.116252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.116343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.116370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 
00:27:21.545 [2024-12-10 22:58:29.116455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.116479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.116564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.116590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.116677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.116702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.116802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.116827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.116908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.116933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 
00:27:21.545 [2024-12-10 22:58:29.117026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.117051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.117132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.545 [2024-12-10 22:58:29.117156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.545 qpair failed and we were unable to recover it. 00:27:21.545 [2024-12-10 22:58:29.117239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.117264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.117345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.117370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.117457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.117486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 
00:27:21.546 [2024-12-10 22:58:29.117583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.117612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.117711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.117737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.117858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.117887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.117982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.118014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.118106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.118133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 
00:27:21.546 [2024-12-10 22:58:29.118251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.118277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.118361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.118386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.118500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.118525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.118624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.118653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.118751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.118779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 
00:27:21.546 [2024-12-10 22:58:29.118876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.118907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.119013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.119039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.119127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.119153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.119245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.119272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.119388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.119415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 
00:27:21.546 [2024-12-10 22:58:29.119512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.119560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.119662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.119691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.119780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.119807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.119893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.119920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.120006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.120033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 
00:27:21.546 [2024-12-10 22:58:29.120121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.120148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.120225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.120252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.120330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.120357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.120467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.120493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.120583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.120611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 
00:27:21.546 [2024-12-10 22:58:29.120700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.120724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.120798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.120824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.120901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.120926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.121015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.546 [2024-12-10 22:58:29.121045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.546 qpair failed and we were unable to recover it. 00:27:21.546 [2024-12-10 22:58:29.121141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.121169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 
00:27:21.547 [2024-12-10 22:58:29.121258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.121290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.121406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.121433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.121518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.121550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.121636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.121663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.121746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.121774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 
00:27:21.547 [2024-12-10 22:58:29.121855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.121881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.121977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.122005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.122091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.122117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.122262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.122297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.122389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.122416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 
00:27:21.547 [2024-12-10 22:58:29.122504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.122530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.122635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.122661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.122745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.122770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.122863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.122891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.122983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.123010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 
00:27:21.547 [2024-12-10 22:58:29.123127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.123155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.123244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.123270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.123361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.123388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.123480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.123507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 00:27:21.547 [2024-12-10 22:58:29.123617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.547 [2024-12-10 22:58:29.123645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.547 qpair failed and we were unable to recover it. 
00:27:21.547 [2024-12-10 22:58:29.123728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.547 [2024-12-10 22:58:29.123755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.547 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeated through 00:27:21.551 (host timestamps 22:58:29.123844–22:58:29.137389) for tqpair values 0x7f08dc000b90, 0x7f08e4000b90, 0x7f08d8000b90, and 0x1ea6fa0, all targeting addr=10.0.0.2, port=4420; repeats elided ...]
00:27:21.551 [2024-12-10 22:58:29.137471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.137497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.137597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.137626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.137716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.137743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.137840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.137879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.138003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.138031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 
00:27:21.551 [2024-12-10 22:58:29.138120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.138146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.138261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.138288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.138370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.138397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.138484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.138511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.138598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.138626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 
00:27:21.551 [2024-12-10 22:58:29.138709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.138735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.138845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.138871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.138953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.138979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.139071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.139099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.139207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.139242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 
00:27:21.551 [2024-12-10 22:58:29.139335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.139362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.139441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.139465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.139556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.139582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.139663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.139688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.139767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.139791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 
00:27:21.551 [2024-12-10 22:58:29.139899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.139923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.140007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.140035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.551 [2024-12-10 22:58:29.140160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.551 [2024-12-10 22:58:29.140187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.551 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.140269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.140295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.140394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.140421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 
00:27:21.552 [2024-12-10 22:58:29.140500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.140527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.140644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.140672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.140779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.140806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.140891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.140919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.141014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.141040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 
00:27:21.552 [2024-12-10 22:58:29.141148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.141175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.141285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.141312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.141400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.141430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.141513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.141541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.141637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.141663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 
00:27:21.552 [2024-12-10 22:58:29.141750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.141774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.141861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.141886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.141973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.142002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.142091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.142117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.142202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.142230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 
00:27:21.552 [2024-12-10 22:58:29.142320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.142347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.142459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.142487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.142567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.142594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.142685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.142712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.142788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.142813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 
00:27:21.552 [2024-12-10 22:58:29.142933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.142961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.143083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.143110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.143191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.143219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.143307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.143335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.143427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.143454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 
00:27:21.552 [2024-12-10 22:58:29.143539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.143571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.143657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.143683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.143759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.143785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.143873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.143900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.144001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.144027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 
00:27:21.552 [2024-12-10 22:58:29.144138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.144164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.144258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.552 [2024-12-10 22:58:29.144287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.552 qpair failed and we were unable to recover it. 00:27:21.552 [2024-12-10 22:58:29.144382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.144409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.144511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.144559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.144655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.144682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 
00:27:21.553 [2024-12-10 22:58:29.144770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.144796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.144883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.144910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.144995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.145022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.145118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.145146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.145275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.145309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 
00:27:21.553 [2024-12-10 22:58:29.145400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.145430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.145512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.145551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.145665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.145691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.145775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.145800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.145876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.145901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 
00:27:21.553 [2024-12-10 22:58:29.145986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.146011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.146127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.146159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.146259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.146287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.146373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.146407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 00:27:21.553 [2024-12-10 22:58:29.146499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.553 [2024-12-10 22:58:29.146525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.553 qpair failed and we were unable to recover it. 
00:27:21.553 [2024-12-10 22:58:29.146620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.553 [2024-12-10 22:58:29.146646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.553 qpair failed and we were unable to recover it.
[... same triplet — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeated through 22:58:29.160783 for tqpair values 0x1ea6fa0, 0x7f08d8000b90, 0x7f08dc000b90, and 0x7f08e4000b90, all with addr=10.0.0.2, port=4420 ...]
00:27:21.556 [2024-12-10 22:58:29.160898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.556 [2024-12-10 22:58:29.160927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.556 qpair failed and we were unable to recover it. 00:27:21.556 [2024-12-10 22:58:29.161046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.161072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.161161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.161187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.161275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.161302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.161384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.161411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 
00:27:21.557 [2024-12-10 22:58:29.161508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.161555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.161657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.161685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.161773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.161805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.161901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.161928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.162034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.162060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 
00:27:21.557 [2024-12-10 22:58:29.162146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.162174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.162278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.162317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.162408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.162434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.162530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.162562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.162645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.162669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 
00:27:21.557 [2024-12-10 22:58:29.162750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.162776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.162864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.162889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.162966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.162991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.163067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.163092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.163197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.163221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 
00:27:21.557 [2024-12-10 22:58:29.163319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.163359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.163478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.163506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.163611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.163640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.163723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.163749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.163864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.163893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 
00:27:21.557 [2024-12-10 22:58:29.163996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.164024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.164109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.164136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.164213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.164238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.164315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.164339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.164426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.164450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 
00:27:21.557 [2024-12-10 22:58:29.164528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.164557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.164646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.164671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.164757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.164781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.164868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.164894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.557 [2024-12-10 22:58:29.164982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.165006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 
00:27:21.557 [2024-12-10 22:58:29.165085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.557 [2024-12-10 22:58:29.165109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.557 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.165188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.165212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.165288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.165313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.165416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.165440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.165585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.165615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 
00:27:21.558 [2024-12-10 22:58:29.165708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.165733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.165828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.165857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.165972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.165998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.166085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.166111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.166193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.166219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 
00:27:21.558 [2024-12-10 22:58:29.166309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.166336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.166413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.166439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.166555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.166583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.166681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.166707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.166794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.166818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 
00:27:21.558 [2024-12-10 22:58:29.166903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.166928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.167013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.167037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.167163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.167201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.167300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.167330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.167448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.167475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 
00:27:21.558 [2024-12-10 22:58:29.167576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.167603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.167691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.167718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.167806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.167833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.167952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.167979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.168066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.168096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 
00:27:21.558 [2024-12-10 22:58:29.168185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.168213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.168298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.168325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.168418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.168444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.168571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.168598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.168673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.168705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 
00:27:21.558 [2024-12-10 22:58:29.168798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.168823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.168904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.168929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.169012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.169038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.169130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.169157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 00:27:21.558 [2024-12-10 22:58:29.169244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.558 [2024-12-10 22:58:29.169273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.558 qpair failed and we were unable to recover it. 
00:27:21.558 [2024-12-10 22:58:29.169398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.559 [2024-12-10 22:58:29.169426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.559 qpair failed and we were unable to recover it. 00:27:21.559 [2024-12-10 22:58:29.169517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.559 [2024-12-10 22:58:29.169543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.559 qpair failed and we were unable to recover it. 00:27:21.559 [2024-12-10 22:58:29.169641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.559 [2024-12-10 22:58:29.169667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.559 qpair failed and we were unable to recover it. 00:27:21.559 [2024-12-10 22:58:29.169746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.559 [2024-12-10 22:58:29.169772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.559 qpair failed and we were unable to recover it. 00:27:21.559 [2024-12-10 22:58:29.169856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.559 [2024-12-10 22:58:29.169882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.559 qpair failed and we were unable to recover it. 
00:27:21.559 [2024-12-10 22:58:29.169963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.559 [2024-12-10 22:58:29.169989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.559 qpair failed and we were unable to recover it.
00:27:21.559 [... the same three-entry pattern (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error; qpair failed and we were unable to recover it) repeats continuously from 2024-12-10 22:58:29.170071 through 22:58:29.183540, cycling over tqpair values 0x7f08dc000b90, 0x7f08d8000b90, 0x7f08e4000b90, and 0x1ea6fa0, all with addr=10.0.0.2, port=4420 ...]
00:27:21.562 [2024-12-10 22:58:29.183748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.183776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.183889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.183914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.183996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.184023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.184105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.184130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.184241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.184267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 
00:27:21.562 [2024-12-10 22:58:29.184357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.184395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.184476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.184503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.184601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.184629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.184709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.184735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.184816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.184843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 
00:27:21.562 [2024-12-10 22:58:29.184920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.184946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.185021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.185047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.185125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.185152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.185225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.185251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.185326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.185354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 
00:27:21.562 [2024-12-10 22:58:29.185440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.185468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.185558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.185587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.185665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.185691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.185781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.185807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.185888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.185914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 
00:27:21.562 [2024-12-10 22:58:29.186033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.186060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.186254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.186287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.186367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.186392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.186504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.562 [2024-12-10 22:58:29.186530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.562 qpair failed and we were unable to recover it. 00:27:21.562 [2024-12-10 22:58:29.186615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.186641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 
00:27:21.563 [2024-12-10 22:58:29.186728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.186754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.186838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.186863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.186942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.186969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.187055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.187082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.187166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.187194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 
00:27:21.563 [2024-12-10 22:58:29.187287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.187315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.187428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.187467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.187578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.187605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.187696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.187721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.187799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.187824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 
00:27:21.563 [2024-12-10 22:58:29.187908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.187933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.188016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.188044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.188132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.188159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.188247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.188274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.188359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.188386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 
00:27:21.563 [2024-12-10 22:58:29.188462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.188488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.188568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.188595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.188677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.188704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.188782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.188807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.188888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.188913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 
00:27:21.563 [2024-12-10 22:58:29.189005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.189031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.189112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.189139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.189221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.189249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.189338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.189371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.189459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.189485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 
00:27:21.563 [2024-12-10 22:58:29.189561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.189589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.189673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.189698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.189778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.189805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.189878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.189904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.189988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.190015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 
00:27:21.563 [2024-12-10 22:58:29.190094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.190121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.190204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.190231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.190315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.190343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.190429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.190456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.190534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.190566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 
00:27:21.563 [2024-12-10 22:58:29.190651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.190677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.190761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.190787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.190902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.563 [2024-12-10 22:58:29.190928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.563 qpair failed and we were unable to recover it. 00:27:21.563 [2024-12-10 22:58:29.191013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.191039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.191124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.191152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 
00:27:21.564 [2024-12-10 22:58:29.191249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.191287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.191374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.191401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.191492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.191517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.191611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.191637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.191721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.191747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 
00:27:21.564 [2024-12-10 22:58:29.191829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.191855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.191941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.191968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.192051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.192077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.192157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.192183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.192270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.192296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 
00:27:21.564 [2024-12-10 22:58:29.192382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.192408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.192523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.192561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.192655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.192682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.192759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.192784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.192862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.192888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 
00:27:21.564 [2024-12-10 22:58:29.192999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.193025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.193105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.193130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.193215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.193242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.193324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.193351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.193465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.193491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 
00:27:21.564 [2024-12-10 22:58:29.193604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.193630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.193713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.193738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.193825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.193851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.193954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.193985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.194108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.194134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 
00:27:21.564 [2024-12-10 22:58:29.194215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.194240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.194324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.194350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.194428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.194453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.194536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.194568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.194660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.194685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 
00:27:21.564 [2024-12-10 22:58:29.194773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.194799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.194881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.194907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.194992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.195020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.195107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.195136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.195217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.195243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 
00:27:21.564 [2024-12-10 22:58:29.195357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.195384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.195467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.564 [2024-12-10 22:58:29.195493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.564 qpair failed and we were unable to recover it. 00:27:21.564 [2024-12-10 22:58:29.195585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.195612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.195703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.195729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.195811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.195837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 
00:27:21.565 [2024-12-10 22:58:29.195921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.195946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.196027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.196054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.196135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.196161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.196246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.196271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.196386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.196413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 
00:27:21.565 [2024-12-10 22:58:29.196494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.196521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.196616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.196643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.196719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.196745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.196822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.196848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.196958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.196984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 
00:27:21.565 [2024-12-10 22:58:29.197076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.197103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.197185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.197210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.197321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.197349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.197424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.197449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.197570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.197602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 
00:27:21.565 [2024-12-10 22:58:29.197684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.197710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.565 [2024-12-10 22:58:29.197787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.565 [2024-12-10 22:58:29.197813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.565 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.197923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.197950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.198037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.198064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.198178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.198207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 
00:27:21.841 [2024-12-10 22:58:29.198322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.198350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.198484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.198523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.198623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.198650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.198739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.198769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.198855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.198880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 
00:27:21.841 [2024-12-10 22:58:29.198960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.198985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.199098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.199124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.199206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.199233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.199350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.199379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.199461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.199486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 
00:27:21.841 [2024-12-10 22:58:29.199604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.199631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.199716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.199741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.199818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.199843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.199925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.199950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.200030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.200054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 
00:27:21.841 [2024-12-10 22:58:29.200143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.200171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.200255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.200281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.200374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.200402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.200492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.200519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.200613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.200639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 
00:27:21.841 [2024-12-10 22:58:29.200722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.200748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.200825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.200850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.841 qpair failed and we were unable to recover it. 00:27:21.841 [2024-12-10 22:58:29.200938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.841 [2024-12-10 22:58:29.200963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.201052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.201078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.201169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.201195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 
00:27:21.842 [2024-12-10 22:58:29.201279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.201304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.201411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.201436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.201528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.201563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.201655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.201683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.201769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.201794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 
00:27:21.842 [2024-12-10 22:58:29.201909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.201937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.842 [2024-12-10 22:58:29.202052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.202093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.202221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.202261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.842 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.202359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.202387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.202463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.202488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 
00:27:21.842 [2024-12-10 22:58:29.202578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.202606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:21.842 [2024-12-10 22:58:29.202688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.202716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.202799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.202824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.202912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.202939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.842 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.203040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.203070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 
00:27:21.842 [2024-12-10 22:58:29.203156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.203185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.842 [2024-12-10 22:58:29.203268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.203304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.203413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.203439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.203517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.203543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.203629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.203655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 
00:27:21.842 [2024-12-10 22:58:29.203806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.203832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.203916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.203940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.204017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.204042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.204122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.204147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.204281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.204321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 
00:27:21.842 [2024-12-10 22:58:29.204411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.204439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.204529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.204571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.204655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.204682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.204760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.204787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.204865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.204892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 
00:27:21.842 [2024-12-10 22:58:29.204978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.205004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.205092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.205130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.205221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.205251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.205342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.205370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 00:27:21.842 [2024-12-10 22:58:29.205446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.842 [2024-12-10 22:58:29.205472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.842 qpair failed and we were unable to recover it. 
00:27:21.843 [2024-12-10 22:58:29.205596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.205625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.205712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.205737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.205817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.205844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.205954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.205980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.206054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.206082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 
00:27:21.843 [2024-12-10 22:58:29.206191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.206218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.206293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.206320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.206411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.206441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.206525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.206565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.206669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.206694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 
00:27:21.843 [2024-12-10 22:58:29.206776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.206804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.206892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.206918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.207020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.207048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.207129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.207155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.207244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.207271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 
00:27:21.843 [2024-12-10 22:58:29.207349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.207375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.207459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.207486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.207573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.207601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.207686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.207712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.207803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.207829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 
00:27:21.843 [2024-12-10 22:58:29.207912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.207938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.208014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.208039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.208153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.208180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.208260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.208287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.208365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.208392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 
00:27:21.843 [2024-12-10 22:58:29.208469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.208495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.208624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.208651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.208726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.208752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.208838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.208865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.208979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.209005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 
00:27:21.843 [2024-12-10 22:58:29.209092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.209119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.209203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.209228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.209315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.209341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.209466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.209492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.209580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.209607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 
00:27:21.843 [2024-12-10 22:58:29.209691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.209719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.209830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.209868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.209990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.210016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.843 [2024-12-10 22:58:29.210094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.843 [2024-12-10 22:58:29.210119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.843 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.210192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.210216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 
00:27:21.844 [2024-12-10 22:58:29.210301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.210328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.210443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.210470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.210554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.210581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.210668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.210694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.210774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.210800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 
00:27:21.844 [2024-12-10 22:58:29.210880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.210908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.211022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.211048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.211145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.211171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.211285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.211317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.211431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.211457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 
00:27:21.844 [2024-12-10 22:58:29.211537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.211576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.211663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.211692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.211776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.211804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.211910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.211935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.212016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.212041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 
00:27:21.844 [2024-12-10 22:58:29.212120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.212144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.212218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.212243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.212324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.212350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.212427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.212456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.212537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.212571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 
00:27:21.844 [2024-12-10 22:58:29.212653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.212680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.212757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.212782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.212870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.212895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.212974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.213000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.213079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.213106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 
00:27:21.844 [2024-12-10 22:58:29.213189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.213213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.213326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.213352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.213433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.213458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.213543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.213575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.213658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.213684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 
00:27:21.844 [2024-12-10 22:58:29.213772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.213797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.213877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.213902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.213995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.214034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.214124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.214151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.214255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.214295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 
00:27:21.844 [2024-12-10 22:58:29.214381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.214412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.214522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.214552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.214633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.844 [2024-12-10 22:58:29.214658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.844 qpair failed and we were unable to recover it. 00:27:21.844 [2024-12-10 22:58:29.214738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.845 [2024-12-10 22:58:29.214763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.845 qpair failed and we were unable to recover it. 00:27:21.845 [2024-12-10 22:58:29.214843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.845 [2024-12-10 22:58:29.214867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.845 qpair failed and we were unable to recover it. 
00:27:21.845 [2024-12-10 22:58:29.214979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.845 [2024-12-10 22:58:29.215007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.845 qpair failed and we were unable to recover it. 00:27:21.845 [2024-12-10 22:58:29.215088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.845 [2024-12-10 22:58:29.215115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.845 qpair failed and we were unable to recover it. 00:27:21.845 [2024-12-10 22:58:29.215214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.845 [2024-12-10 22:58:29.215243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.845 qpair failed and we were unable to recover it. 00:27:21.845 [2024-12-10 22:58:29.215324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.845 [2024-12-10 22:58:29.215350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.845 qpair failed and we were unable to recover it. 00:27:21.845 [2024-12-10 22:58:29.215432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.845 [2024-12-10 22:58:29.215457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.845 qpair failed and we were unable to recover it. 
00:27:21.848 [2024-12-10 22:58:29.227954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.848 [2024-12-10 22:58:29.227980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.228084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.228123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.228215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.228242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.228325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:21.848 [2024-12-10 22:58:29.228354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 
00:27:21.848 [2024-12-10 22:58:29.228442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.228468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.228557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.228583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.228662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.848 [2024-12-10 22:58:29.228689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.228799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.848 [2024-12-10 22:58:29.228825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 
00:27:21.848 [2024-12-10 22:58:29.228935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.228960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.229038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.229065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.229180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.229208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.229288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.229313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.229385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.229412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 
00:27:21.848 [2024-12-10 22:58:29.229523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.229557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.229649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.229676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.229755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.229780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.229867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.229892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.230034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.230065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 
00:27:21.848 [2024-12-10 22:58:29.230176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.230201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.230276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.230302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.230383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.230409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.230520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.230556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 00:27:21.848 [2024-12-10 22:58:29.230645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.848 [2024-12-10 22:58:29.230672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.848 qpair failed and we were unable to recover it. 
00:27:21.849 [2024-12-10 22:58:29.230801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.230839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.230927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.230954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.231073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.231101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.231189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.231214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.231291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.231318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 
00:27:21.849 [2024-12-10 22:58:29.231410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.231436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.231557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.231585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.231670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.231696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.231791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.231818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.231894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.231921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 
00:27:21.849 [2024-12-10 22:58:29.231999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.232028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.232145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.232171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.232246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.232273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.232357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.232383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.232475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.232501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 
00:27:21.849 [2024-12-10 22:58:29.232622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.232648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.232729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.232755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.232828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.232854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.232931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.232957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.233037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.233063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 
00:27:21.849 [2024-12-10 22:58:29.233174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.233202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.233304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.233344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.233430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.233457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.233567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.233595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.233683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.233708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 
00:27:21.849 [2024-12-10 22:58:29.233794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.233820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.233903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.233928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.234015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.234043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.234152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.234177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.234291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.234319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 
00:27:21.849 [2024-12-10 22:58:29.234406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.234432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.234526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.234573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.234669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.234697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.234777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.849 [2024-12-10 22:58:29.234803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.849 qpair failed and we were unable to recover it. 00:27:21.849 [2024-12-10 22:58:29.234912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.234944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 
00:27:21.850 [2024-12-10 22:58:29.235028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.235054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.235143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.235170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.235249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.235276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.235384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.235411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.235496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.235521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 
00:27:21.850 [2024-12-10 22:58:29.235629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.235656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.235734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.235759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.235843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.235871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.235961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.235988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.236076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.236102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 
00:27:21.850 [2024-12-10 22:58:29.236189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.236214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.236297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.236324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.236409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.236434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.236520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.236551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.236640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.236666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 
00:27:21.850 [2024-12-10 22:58:29.236750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.236776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.236875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.236900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.236977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.237002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.237089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.237117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-12-10 22:58:29.237230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.850 [2024-12-10 22:58:29.237257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.850 qpair failed and we were unable to recover it. 
00:27:21.850 [2024-12-10 22:58:29.237337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.850 [2024-12-10 22:58:29.237365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.850 qpair failed and we were unable to recover it.
[The same pair of errors — posix_sock_create connect() failure with errno = 111 (connection refused) followed by nvme_tcp_qpair_connect_sock reporting a sock connection error and "qpair failed and we were unable to recover it." — repeats continuously from 22:58:29.237 through 22:58:29.251 for tqpairs 0x7f08e4000b90, 0x7f08dc000b90, 0x7f08d8000b90, and 0x1ea6fa0, all targeting addr=10.0.0.2, port=4420. Repeated entries elided.]
00:27:21.854 [2024-12-10 22:58:29.251517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.251551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.251677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.251703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.251794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.251819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.251912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.251939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08d8000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.252018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.252045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 
00:27:21.854 [2024-12-10 22:58:29.252130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.252155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.252265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.252290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.252400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.252424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.252530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.252562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.252654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.252681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 
00:27:21.854 [2024-12-10 22:58:29.252770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.252796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.252940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.252964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.253052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.253077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.253188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.253212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.253327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.253355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 
00:27:21.854 [2024-12-10 22:58:29.253439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.253465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.253550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.253576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.253673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.253699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.253810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.253836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.253928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.253953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 
00:27:21.854 [2024-12-10 22:58:29.254061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.254087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.254210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.254248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.254344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.254372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.254461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.254488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.254598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.254626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 
00:27:21.854 [2024-12-10 22:58:29.254739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.254771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.254860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.254886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.255000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.255028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.255144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.255170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.255321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.255347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 
00:27:21.854 [2024-12-10 22:58:29.255428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.255455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.255568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.255593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.255680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.255707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.255786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.255812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 00:27:21.854 [2024-12-10 22:58:29.255904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.255930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.854 qpair failed and we were unable to recover it. 
00:27:21.854 [2024-12-10 22:58:29.256080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-12-10 22:58:29.256105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.256214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.256241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.256354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.256381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.256459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.256485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.256616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.256643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 
00:27:21.855 [2024-12-10 22:58:29.256744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.256770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.256858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.256884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.256968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.256994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.257143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.257170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.257264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.257291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 
00:27:21.855 [2024-12-10 22:58:29.257384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.257408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.257490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.257516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.257636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.257662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.257769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.257795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.257907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.257931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 
00:27:21.855 [2024-12-10 22:58:29.258014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.258041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.258136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.258163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.258258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.258285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.258360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.258386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.258491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.258518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 
00:27:21.855 [2024-12-10 22:58:29.258635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.258662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.258776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.258803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.258885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.258912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.259006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.259033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.259139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.259165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 
00:27:21.855 [2024-12-10 22:58:29.259247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.259273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.259382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.259408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.259504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.259532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.259638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.259677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.259779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.259808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 
00:27:21.855 [2024-12-10 22:58:29.259925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.855 [2024-12-10 22:58:29.259950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.855 qpair failed and we were unable to recover it. 00:27:21.855 [2024-12-10 22:58:29.260055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.856 [2024-12-10 22:58:29.260081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.856 qpair failed and we were unable to recover it. 00:27:21.856 [2024-12-10 22:58:29.260171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.856 [2024-12-10 22:58:29.260198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.856 qpair failed and we were unable to recover it. 00:27:21.856 [2024-12-10 22:58:29.260284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.856 [2024-12-10 22:58:29.260311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.856 qpair failed and we were unable to recover it. 00:27:21.856 [2024-12-10 22:58:29.260414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.856 [2024-12-10 22:58:29.260453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.856 qpair failed and we were unable to recover it. 
00:27:21.856 [2024-12-10 22:58:29.260541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.856 [2024-12-10 22:58:29.260574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.856 qpair failed and we were unable to recover it. 00:27:21.856 [2024-12-10 22:58:29.260715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.856 [2024-12-10 22:58:29.260742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.856 qpair failed and we were unable to recover it. 00:27:21.856 [2024-12-10 22:58:29.260826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.856 [2024-12-10 22:58:29.260851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.856 qpair failed and we were unable to recover it. 00:27:21.856 [2024-12-10 22:58:29.260926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.856 [2024-12-10 22:58:29.260953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.856 qpair failed and we were unable to recover it. 00:27:21.856 [2024-12-10 22:58:29.261065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.856 [2024-12-10 22:58:29.261089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.856 qpair failed and we were unable to recover it. 
00:27:21.856 [2024-12-10 22:58:29.261204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.261229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.261306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.261331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.261413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.261438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.261529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.261563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.261665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.261692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.261780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.261806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.261886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.261912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.262031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.262058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.262157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.262183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.262299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.262325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.262413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.262440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.262512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.262538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.262639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.262666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.262750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.262777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.262860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.262886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.262994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.263021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.263122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.263148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.263232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.263265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.263353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.263379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.263468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.263495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.263595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.263622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.263743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.263770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.263850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.263876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.263991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.264018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.264136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.264164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.264252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.264279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.856 [2024-12-10 22:58:29.264367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.856 [2024-12-10 22:58:29.264394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.856 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.264483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.264509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.264620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.264647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.264765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.264790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.264866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.264891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.264987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.265012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.265130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.265155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.265275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.265301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.265391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.265417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.265506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.265552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.265649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.265675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.265769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.265796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.265886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.265912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.266026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.266051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.266137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.266163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.266242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.266269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.266373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.266413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.266513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.266542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.266638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.266679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.266770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.266796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.266941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.266968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.267056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.267084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.267166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.267191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.267280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.267306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.267390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.267414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.267501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.267525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.267633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.267658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.267739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.267764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.267841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.267865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.267957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.267982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.268070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.268098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.268189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.268218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.268308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.268334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.268441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.268467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.268576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.268611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.857 qpair failed and we were unable to recover it.
00:27:21.857 [2024-12-10 22:58:29.268711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.857 [2024-12-10 22:58:29.268737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.268832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.268859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.269008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.269034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.269115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.269141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.269223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.269250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.269328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.269352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.269466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.269492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.269573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.269599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.269684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.269708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.269795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.269820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.269915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.269942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.270024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.270050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.270140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.270166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.270242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.270268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.270347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.270372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.270485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.270511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.270610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.270636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.270753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.270778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.270896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.270921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.271006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.271032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.271154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.271180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.271291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.271317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.271420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.271444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.271532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.271563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.271656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.271687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.271766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.271792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.271903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.271928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.272025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.272051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.272133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.272160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.272268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.272308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 Malloc0
00:27:21.858 [2024-12-10 22:58:29.272439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.272467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.272598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.272626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 [2024-12-10 22:58:29.272720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.272747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.858 [2024-12-10 22:58:29.272835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.858 [2024-12-10 22:58:29.272861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.858 qpair failed and we were unable to recover it.
00:27:21.858 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:21.858 [2024-12-10 22:58:29.272956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.272983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.273058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.273082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.273170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.273196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.273282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.273307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.273384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.273409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.273496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.273521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.273626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.273651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.273731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.273756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.273837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.273862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.273976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.274004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.274093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.274121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.274220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.274247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.274330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.274356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.274432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.274458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.274536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.274569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.274678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.274705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.274786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.274811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.274896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.274921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.274995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.859 [2024-12-10 22:58:29.275020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.859 qpair failed and we were unable to recover it.
00:27:21.859 [2024-12-10 22:58:29.275121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.275146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.859 [2024-12-10 22:58:29.275230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.275255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.859 [2024-12-10 22:58:29.275367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.275392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.859 [2024-12-10 22:58:29.275508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.275555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.859 [2024-12-10 22:58:29.275662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.275691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 
00:27:21.859 [2024-12-10 22:58:29.275770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.275798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.859 [2024-12-10 22:58:29.275874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.275900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.859 [2024-12-10 22:58:29.275980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.276006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.859 [2024-12-10 22:58:29.276046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.859 [2024-12-10 22:58:29.276091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.276117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.859 [2024-12-10 22:58:29.276221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.276247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 
00:27:21.859 [2024-12-10 22:58:29.276358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.276382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.859 [2024-12-10 22:58:29.276461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.276486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.859 [2024-12-10 22:58:29.276566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.276592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.859 [2024-12-10 22:58:29.276685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.859 [2024-12-10 22:58:29.276711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.859 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.276798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.276825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 
00:27:21.860 [2024-12-10 22:58:29.276920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.276947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.277033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.277059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.277143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.277169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.277251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.277277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.277363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.277389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 
00:27:21.860 [2024-12-10 22:58:29.277488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.277526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.277637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.277668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.277757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.277788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.277897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.277922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.278002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.278026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 
00:27:21.860 [2024-12-10 22:58:29.278107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.278131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.278213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.278238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.278325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.278349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.278428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.278452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.278535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.278567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 
00:27:21.860 [2024-12-10 22:58:29.278652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.278677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.278767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.278791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.278947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.278975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.279059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.279085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.279182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.279208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 
00:27:21.860 [2024-12-10 22:58:29.279298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.279323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.279442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.279468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.279556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.279583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.279697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.279724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.279823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.279862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 
00:27:21.860 [2024-12-10 22:58:29.279991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.280019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.280107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.280134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.280213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.280239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.280354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.860 [2024-12-10 22:58:29.280381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.860 qpair failed and we were unable to recover it. 00:27:21.860 [2024-12-10 22:58:29.280469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.280496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 
00:27:21.861 [2024-12-10 22:58:29.280590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.280618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.280711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.280736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.280822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.280848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.280987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.281011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.281098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.281123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 
00:27:21.861 [2024-12-10 22:58:29.281232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.281256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.281350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.281377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.281456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.281482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.281575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.281604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.281693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.281719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 
00:27:21.861 [2024-12-10 22:58:29.281836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.281862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.281949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.281976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.282061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.282088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.282208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.282234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.282316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.282343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 
00:27:21.861 [2024-12-10 22:58:29.282437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.282465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.282560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.282586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.282680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.282705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.282790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.282815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.282930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.282956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 
00:27:21.861 [2024-12-10 22:58:29.283040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.283067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.283156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.283183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.283275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.283302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.283389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.283415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.283529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.283561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 
00:27:21.861 [2024-12-10 22:58:29.283652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.283677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.283763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.283790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.283901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.283926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.284014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.284040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 00:27:21.861 [2024-12-10 22:58:29.284120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.861 [2024-12-10 22:58:29.284145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.861 qpair failed and we were unable to recover it. 
00:27:21.861 [2024-12-10 22:58:29.284227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.861 [2024-12-10 22:58:29.284251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420
00:27:21.861 qpair failed and we were unable to recover it.
00:27:21.861 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.861 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:21.861 [2024-12-10 22:58:29.284446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.861 [2024-12-10 22:58:29.284473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420
00:27:21.862 qpair failed and we were unable to recover it.
00:27:21.862 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.862 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:21.862 [2024-12-10 22:58:29.285609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.862 [2024-12-10 22:58:29.285644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420
00:27:21.862 qpair failed and we were unable to recover it.
00:27:21.863 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.863 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:21.864 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.864 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:21.864 [2024-12-10 22:58:29.292948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.292973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.293054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.293078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.293193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.293220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.293299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.293325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.293412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.293438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 
00:27:21.864 [2024-12-10 22:58:29.293522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.293554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.293680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.293707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.293792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.293820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.293936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.293962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.294048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.294072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 
00:27:21.864 [2024-12-10 22:58:29.294151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.294177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.294257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.294282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.294359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.294383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.294462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.294486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.294574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.294599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 
00:27:21.864 [2024-12-10 22:58:29.294712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.294737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.294856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.294881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.294967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.294992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.295069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.295101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.295209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.295235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 
00:27:21.864 [2024-12-10 22:58:29.295321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.295348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.295493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.295519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.295619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.295647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.295734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.295761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 00:27:21.864 [2024-12-10 22:58:29.295877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.864 [2024-12-10 22:58:29.295904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.864 qpair failed and we were unable to recover it. 
00:27:21.865 [2024-12-10 22:58:29.299814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.865 [2024-12-10 22:58:29.299839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6fa0 with addr=10.0.0.2, port=4420 00:27:21.865 qpair failed and we were unable to recover it. 00:27:21.865 [2024-12-10 22:58:29.299930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.865 [2024-12-10 22:58:29.299958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.865 qpair failed and we were unable to recover it. 00:27:21.865 [2024-12-10 22:58:29.300047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.865 [2024-12-10 22:58:29.300073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.865 qpair failed and we were unable to recover it. 00:27:21.865 [2024-12-10 22:58:29.300203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.865 [2024-12-10 22:58:29.300234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.865 qpair failed and we were unable to recover it. 00:27:21.865 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.865 [2024-12-10 22:58:29.300355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.865 [2024-12-10 22:58:29.300381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.865 qpair failed and we were unable to recover it. 
00:27:21.865 [2024-12-10 22:58:29.300471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.865 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.865 [2024-12-10 22:58:29.300498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08dc000b90 with addr=10.0.0.2, port=4420 00:27:21.865 qpair failed and we were unable to recover it. 00:27:21.865 [2024-12-10 22:58:29.300619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.865 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.865 [2024-12-10 22:58:29.300647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.866 qpair failed and we were unable to recover it. 00:27:21.866 [2024-12-10 22:58:29.300729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.866 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.866 [2024-12-10 22:58:29.300755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.866 qpair failed and we were unable to recover it. 00:27:21.866 [2024-12-10 22:58:29.300874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.866 [2024-12-10 22:58:29.300899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f08e4000b90 with addr=10.0.0.2, port=4420 00:27:21.866 qpair failed and we were unable to recover it. 
00:27:21.866 [2024-12-10 22:58:29.304332] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.866 [2024-12-10 22:58:29.306882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.866 [2024-12-10 22:58:29.307039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.866 [2024-12-10 22:58:29.307069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.866 [2024-12-10 22:58:29.307085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.866 [2024-12-10 22:58:29.307099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.866 [2024-12-10 22:58:29.307135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.866 qpair failed and we were unable to recover it. 
00:27:21.866 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.866 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:21.867 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.867 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.867 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.867 22:58:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 179920 00:27:21.867 [2024-12-10 22:58:29.316766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.867 [2024-12-10 22:58:29.316864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.867 [2024-12-10 22:58:29.316890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.867 [2024-12-10 22:58:29.316904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.867 [2024-12-10 22:58:29.316919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.867 [2024-12-10 22:58:29.316950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.867 qpair failed and we were unable to recover it. 
00:27:21.867 [2024-12-10 22:58:29.326741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.867 [2024-12-10 22:58:29.326826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.867 [2024-12-10 22:58:29.326852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.867 [2024-12-10 22:58:29.326867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.867 [2024-12-10 22:58:29.326880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.867 [2024-12-10 22:58:29.326910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.867 qpair failed and we were unable to recover it. 
00:27:21.867 [2024-12-10 22:58:29.416868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.867 [2024-12-10 22:58:29.416964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.867 [2024-12-10 22:58:29.416990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.867 [2024-12-10 22:58:29.417006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.867 [2024-12-10 22:58:29.417019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.867 [2024-12-10 22:58:29.417050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.867 qpair failed and we were unable to recover it. 
00:27:21.867 [2024-12-10 22:58:29.427020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.867 [2024-12-10 22:58:29.427115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.867 [2024-12-10 22:58:29.427142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.867 [2024-12-10 22:58:29.427157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.867 [2024-12-10 22:58:29.427170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.427201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.436922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.437014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.437040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.437060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.437074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.437105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.446971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.447091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.447118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.447133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.447145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.447180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.456965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.457054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.457081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.457095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.457108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.457139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.467049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.467148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.467175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.467189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.467202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.467233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.477004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.477094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.477121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.477136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.477149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.477186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.487041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.487167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.487197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.487214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.487227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.487258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.497142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.497236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.497264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.497285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.497298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.497330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.507125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.507216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.507242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.507257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.507270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.507300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.517202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.517284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.517311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.517325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.517338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.517369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.527135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.527222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.527247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.527262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.527274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.527304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.537295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.537398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.537424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.537439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.537451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.537482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.868 qpair failed and we were unable to recover it. 
00:27:21.868 [2024-12-10 22:58:29.547217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.868 [2024-12-10 22:58:29.547301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.868 [2024-12-10 22:58:29.547327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.868 [2024-12-10 22:58:29.547342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.868 [2024-12-10 22:58:29.547355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:21.868 [2024-12-10 22:58:29.547397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.869 qpair failed and we were unable to recover it. 
00:27:22.129 [2024-12-10 22:58:29.557219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.129 [2024-12-10 22:58:29.557302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.129 [2024-12-10 22:58:29.557328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.129 [2024-12-10 22:58:29.557342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.129 [2024-12-10 22:58:29.557355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.129 [2024-12-10 22:58:29.557388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.129 qpair failed and we were unable to recover it. 
00:27:22.129 [2024-12-10 22:58:29.567237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.129 [2024-12-10 22:58:29.567334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.129 [2024-12-10 22:58:29.567366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.129 [2024-12-10 22:58:29.567381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.129 [2024-12-10 22:58:29.567394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.129 [2024-12-10 22:58:29.567425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.129 qpair failed and we were unable to recover it. 
00:27:22.129 [2024-12-10 22:58:29.577343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.129 [2024-12-10 22:58:29.577458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.129 [2024-12-10 22:58:29.577484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.129 [2024-12-10 22:58:29.577498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.129 [2024-12-10 22:58:29.577511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.129 [2024-12-10 22:58:29.577550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.129 qpair failed and we were unable to recover it. 
00:27:22.129 [2024-12-10 22:58:29.587312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.129 [2024-12-10 22:58:29.587408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.129 [2024-12-10 22:58:29.587434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.129 [2024-12-10 22:58:29.587448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.129 [2024-12-10 22:58:29.587461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.129 [2024-12-10 22:58:29.587516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.129 qpair failed and we were unable to recover it. 
00:27:22.129 [2024-12-10 22:58:29.597368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.129 [2024-12-10 22:58:29.597455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.129 [2024-12-10 22:58:29.597480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.129 [2024-12-10 22:58:29.597494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.129 [2024-12-10 22:58:29.597507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.129 [2024-12-10 22:58:29.597538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.129 qpair failed and we were unable to recover it. 
00:27:22.129 [2024-12-10 22:58:29.607432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.129 [2024-12-10 22:58:29.607522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.129 [2024-12-10 22:58:29.607556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.129 [2024-12-10 22:58:29.607573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.129 [2024-12-10 22:58:29.607586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.129 [2024-12-10 22:58:29.607635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.129 qpair failed and we were unable to recover it. 
00:27:22.129 [2024-12-10 22:58:29.617404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.129 [2024-12-10 22:58:29.617496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.129 [2024-12-10 22:58:29.617521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.129 [2024-12-10 22:58:29.617535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.129 [2024-12-10 22:58:29.617555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.129 [2024-12-10 22:58:29.617588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.129 qpair failed and we were unable to recover it. 
00:27:22.129 [2024-12-10 22:58:29.627461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.129 [2024-12-10 22:58:29.627552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.129 [2024-12-10 22:58:29.627578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.129 [2024-12-10 22:58:29.627595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.129 [2024-12-10 22:58:29.627609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.129 [2024-12-10 22:58:29.627639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.129 qpair failed and we were unable to recover it. 
00:27:22.129 [2024-12-10 22:58:29.637470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.129 [2024-12-10 22:58:29.637584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.130 [2024-12-10 22:58:29.637614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.130 [2024-12-10 22:58:29.637630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.130 [2024-12-10 22:58:29.637644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.130 [2024-12-10 22:58:29.637675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.130 qpair failed and we were unable to recover it. 
00:27:22.130 [2024-12-10 22:58:29.647508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.130 [2024-12-10 22:58:29.647633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.130 [2024-12-10 22:58:29.647659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.130 [2024-12-10 22:58:29.647674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.130 [2024-12-10 22:58:29.647686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.130 [2024-12-10 22:58:29.647717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.130 qpair failed and we were unable to recover it. 
00:27:22.130 [2024-12-10 22:58:29.657571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.130 [2024-12-10 22:58:29.657685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.130 [2024-12-10 22:58:29.657711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.130 [2024-12-10 22:58:29.657726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.130 [2024-12-10 22:58:29.657739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.130 [2024-12-10 22:58:29.657768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.130 qpair failed and we were unable to recover it. 
00:27:22.130 [2024-12-10 22:58:29.667560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.130 [2024-12-10 22:58:29.667649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.130 [2024-12-10 22:58:29.667675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.130 [2024-12-10 22:58:29.667689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.130 [2024-12-10 22:58:29.667702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.130 [2024-12-10 22:58:29.667733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.130 qpair failed and we were unable to recover it. 
00:27:22.130 [2024-12-10 22:58:29.677581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.130 [2024-12-10 22:58:29.677673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.130 [2024-12-10 22:58:29.677697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.130 [2024-12-10 22:58:29.677712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.130 [2024-12-10 22:58:29.677725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90 00:27:22.130 [2024-12-10 22:58:29.677754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.130 qpair failed and we were unable to recover it. 
00:27:22.130 [2024-12-10 22:58:29.687689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.130 [2024-12-10 22:58:29.687772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.130 [2024-12-10 22:58:29.687797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.130 [2024-12-10 22:58:29.687812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.130 [2024-12-10 22:58:29.687825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.130 [2024-12-10 22:58:29.687855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.130 qpair failed and we were unable to recover it.
00:27:22.130 [2024-12-10 22:58:29.697648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.130 [2024-12-10 22:58:29.697737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.130 [2024-12-10 22:58:29.697771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.130 [2024-12-10 22:58:29.697786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.130 [2024-12-10 22:58:29.697799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.130 [2024-12-10 22:58:29.697829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.130 qpair failed and we were unable to recover it.
00:27:22.130 [2024-12-10 22:58:29.707694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.130 [2024-12-10 22:58:29.707796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.130 [2024-12-10 22:58:29.707822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.130 [2024-12-10 22:58:29.707836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.130 [2024-12-10 22:58:29.707849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.130 [2024-12-10 22:58:29.707879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.130 qpair failed and we were unable to recover it.
00:27:22.130 [2024-12-10 22:58:29.717745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.130 [2024-12-10 22:58:29.717833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.130 [2024-12-10 22:58:29.717859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.130 [2024-12-10 22:58:29.717873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.130 [2024-12-10 22:58:29.717886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.130 [2024-12-10 22:58:29.717929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.130 qpair failed and we were unable to recover it.
00:27:22.130 [2024-12-10 22:58:29.727844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.130 [2024-12-10 22:58:29.727987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.130 [2024-12-10 22:58:29.728012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.130 [2024-12-10 22:58:29.728026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.130 [2024-12-10 22:58:29.728039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.130 [2024-12-10 22:58:29.728069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.130 qpair failed and we were unable to recover it.
00:27:22.130 [2024-12-10 22:58:29.737797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.130 [2024-12-10 22:58:29.737921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.130 [2024-12-10 22:58:29.737947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.130 [2024-12-10 22:58:29.737961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.130 [2024-12-10 22:58:29.737980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.130 [2024-12-10 22:58:29.738011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.130 qpair failed and we were unable to recover it.
00:27:22.130 [2024-12-10 22:58:29.747804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.130 [2024-12-10 22:58:29.747894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.130 [2024-12-10 22:58:29.747920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.130 [2024-12-10 22:58:29.747934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.130 [2024-12-10 22:58:29.747947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.130 [2024-12-10 22:58:29.747977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.130 qpair failed and we were unable to recover it.
00:27:22.130 [2024-12-10 22:58:29.757924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.130 [2024-12-10 22:58:29.758055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.130 [2024-12-10 22:58:29.758080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.131 [2024-12-10 22:58:29.758095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.131 [2024-12-10 22:58:29.758107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.131 [2024-12-10 22:58:29.758138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.131 qpair failed and we were unable to recover it.
00:27:22.131 [2024-12-10 22:58:29.767839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.131 [2024-12-10 22:58:29.767922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.131 [2024-12-10 22:58:29.767948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.131 [2024-12-10 22:58:29.767962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.131 [2024-12-10 22:58:29.767976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.131 [2024-12-10 22:58:29.768006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.131 qpair failed and we were unable to recover it.
00:27:22.131 [2024-12-10 22:58:29.777947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.131 [2024-12-10 22:58:29.778067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.131 [2024-12-10 22:58:29.778093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.131 [2024-12-10 22:58:29.778107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.131 [2024-12-10 22:58:29.778119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.131 [2024-12-10 22:58:29.778149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.131 qpair failed and we were unable to recover it.
00:27:22.131 [2024-12-10 22:58:29.787911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.131 [2024-12-10 22:58:29.787995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.131 [2024-12-10 22:58:29.788020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.131 [2024-12-10 22:58:29.788035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.131 [2024-12-10 22:58:29.788048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.131 [2024-12-10 22:58:29.788078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.131 qpair failed and we were unable to recover it.
00:27:22.131 [2024-12-10 22:58:29.797956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.131 [2024-12-10 22:58:29.798038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.131 [2024-12-10 22:58:29.798064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.131 [2024-12-10 22:58:29.798078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.131 [2024-12-10 22:58:29.798091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.131 [2024-12-10 22:58:29.798121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.131 qpair failed and we were unable to recover it.
00:27:22.131 [2024-12-10 22:58:29.807953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.131 [2024-12-10 22:58:29.808038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.131 [2024-12-10 22:58:29.808063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.131 [2024-12-10 22:58:29.808077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.131 [2024-12-10 22:58:29.808090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.131 [2024-12-10 22:58:29.808119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.131 qpair failed and we were unable to recover it.
00:27:22.131 [2024-12-10 22:58:29.818086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.131 [2024-12-10 22:58:29.818175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.131 [2024-12-10 22:58:29.818200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.131 [2024-12-10 22:58:29.818214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.131 [2024-12-10 22:58:29.818227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.131 [2024-12-10 22:58:29.818256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.131 qpair failed and we were unable to recover it.
00:27:22.131 [2024-12-10 22:58:29.828083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.131 [2024-12-10 22:58:29.828177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.131 [2024-12-10 22:58:29.828208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.131 [2024-12-10 22:58:29.828224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.131 [2024-12-10 22:58:29.828237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.131 [2024-12-10 22:58:29.828268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.131 qpair failed and we were unable to recover it.
00:27:22.131 [2024-12-10 22:58:29.838040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.131 [2024-12-10 22:58:29.838122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.131 [2024-12-10 22:58:29.838147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.131 [2024-12-10 22:58:29.838162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.131 [2024-12-10 22:58:29.838175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.131 [2024-12-10 22:58:29.838206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.131 qpair failed and we were unable to recover it.
00:27:22.131 [2024-12-10 22:58:29.848059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.131 [2024-12-10 22:58:29.848141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.131 [2024-12-10 22:58:29.848166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.131 [2024-12-10 22:58:29.848180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.131 [2024-12-10 22:58:29.848193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.131 [2024-12-10 22:58:29.848223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.131 qpair failed and we were unable to recover it.
00:27:22.391 [2024-12-10 22:58:29.858127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.391 [2024-12-10 22:58:29.858230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.392 [2024-12-10 22:58:29.858256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.392 [2024-12-10 22:58:29.858270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.392 [2024-12-10 22:58:29.858283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.392 [2024-12-10 22:58:29.858313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.392 qpair failed and we were unable to recover it.
00:27:22.392 [2024-12-10 22:58:29.868165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.392 [2024-12-10 22:58:29.868260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.392 [2024-12-10 22:58:29.868289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.392 [2024-12-10 22:58:29.868311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.392 [2024-12-10 22:58:29.868325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.392 [2024-12-10 22:58:29.868356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.392 qpair failed and we were unable to recover it.
00:27:22.392 [2024-12-10 22:58:29.878181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.392 [2024-12-10 22:58:29.878265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.392 [2024-12-10 22:58:29.878290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.392 [2024-12-10 22:58:29.878305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.392 [2024-12-10 22:58:29.878317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.392 [2024-12-10 22:58:29.878347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.392 qpair failed and we were unable to recover it.
00:27:22.392 [2024-12-10 22:58:29.888276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.392 [2024-12-10 22:58:29.888359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.392 [2024-12-10 22:58:29.888384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.392 [2024-12-10 22:58:29.888398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.392 [2024-12-10 22:58:29.888410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.392 [2024-12-10 22:58:29.888439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.392 qpair failed and we were unable to recover it.
00:27:22.392 [2024-12-10 22:58:29.898245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.392 [2024-12-10 22:58:29.898344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.392 [2024-12-10 22:58:29.898369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.392 [2024-12-10 22:58:29.898383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.392 [2024-12-10 22:58:29.898396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.392 [2024-12-10 22:58:29.898427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.392 qpair failed and we were unable to recover it.
00:27:22.392 [2024-12-10 22:58:29.908272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.392 [2024-12-10 22:58:29.908367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.392 [2024-12-10 22:58:29.908392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.392 [2024-12-10 22:58:29.908407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.392 [2024-12-10 22:58:29.908420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.392 [2024-12-10 22:58:29.908451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.392 qpair failed and we were unable to recover it.
00:27:22.392 [2024-12-10 22:58:29.918309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.392 [2024-12-10 22:58:29.918402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.392 [2024-12-10 22:58:29.918427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.392 [2024-12-10 22:58:29.918441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.392 [2024-12-10 22:58:29.918454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.392 [2024-12-10 22:58:29.918484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.392 qpair failed and we were unable to recover it.
00:27:22.392 [2024-12-10 22:58:29.928342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.392 [2024-12-10 22:58:29.928424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.392 [2024-12-10 22:58:29.928449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.392 [2024-12-10 22:58:29.928463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.392 [2024-12-10 22:58:29.928475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.392 [2024-12-10 22:58:29.928505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.392 qpair failed and we were unable to recover it.
00:27:22.392 [2024-12-10 22:58:29.938355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.392 [2024-12-10 22:58:29.938448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.392 [2024-12-10 22:58:29.938472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.392 [2024-12-10 22:58:29.938486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.392 [2024-12-10 22:58:29.938498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.392 [2024-12-10 22:58:29.938527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.392 qpair failed and we were unable to recover it.
00:27:22.392 [2024-12-10 22:58:29.948392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.392 [2024-12-10 22:58:29.948475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.392 [2024-12-10 22:58:29.948500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.392 [2024-12-10 22:58:29.948515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.392 [2024-12-10 22:58:29.948527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.392 [2024-12-10 22:58:29.948566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.392 qpair failed and we were unable to recover it.
00:27:22.392 [2024-12-10 22:58:29.958538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.392 [2024-12-10 22:58:29.958632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.392 [2024-12-10 22:58:29.958658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.392 [2024-12-10 22:58:29.958672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.392 [2024-12-10 22:58:29.958685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.392 [2024-12-10 22:58:29.958715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.392 qpair failed and we were unable to recover it.
00:27:22.392 [2024-12-10 22:58:29.968456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.393 [2024-12-10 22:58:29.968536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.393 [2024-12-10 22:58:29.968568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.393 [2024-12-10 22:58:29.968583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.393 [2024-12-10 22:58:29.968596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.393 [2024-12-10 22:58:29.968625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.393 qpair failed and we were unable to recover it.
00:27:22.393 [2024-12-10 22:58:29.978578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.393 [2024-12-10 22:58:29.978718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.393 [2024-12-10 22:58:29.978744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.393 [2024-12-10 22:58:29.978759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.393 [2024-12-10 22:58:29.978772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.393 [2024-12-10 22:58:29.978802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.393 qpair failed and we were unable to recover it.
00:27:22.393 [2024-12-10 22:58:29.988485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.393 [2024-12-10 22:58:29.988578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.393 [2024-12-10 22:58:29.988603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.393 [2024-12-10 22:58:29.988617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.393 [2024-12-10 22:58:29.988629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.393 [2024-12-10 22:58:29.988660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.393 qpair failed and we were unable to recover it.
00:27:22.393 [2024-12-10 22:58:29.998507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.393 [2024-12-10 22:58:29.998600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.393 [2024-12-10 22:58:29.998626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.393 [2024-12-10 22:58:29.998646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.393 [2024-12-10 22:58:29.998660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.393 [2024-12-10 22:58:29.998690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.393 qpair failed and we were unable to recover it.
00:27:22.393 [2024-12-10 22:58:30.008653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.393 [2024-12-10 22:58:30.008752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.393 [2024-12-10 22:58:30.008805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.393 [2024-12-10 22:58:30.008821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.393 [2024-12-10 22:58:30.008835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08e4000b90
00:27:22.393 [2024-12-10 22:58:30.008888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.393 qpair failed and we were unable to recover it.
00:27:22.393 [2024-12-10 22:58:30.018645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.393 [2024-12-10 22:58:30.018738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.393 [2024-12-10 22:58:30.018770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.393 [2024-12-10 22:58:30.018787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.393 [2024-12-10 22:58:30.018801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08d8000b90
00:27:22.393 [2024-12-10 22:58:30.018833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:22.393 qpair failed and we were unable to recover it.
00:27:22.393 [2024-12-10 22:58:30.028652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.393 [2024-12-10 22:58:30.028762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.393 [2024-12-10 22:58:30.028794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.393 [2024-12-10 22:58:30.028809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.393 [2024-12-10 22:58:30.028821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.393 [2024-12-10 22:58:30.028853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.393 qpair failed and we were unable to recover it.
00:27:22.393 [2024-12-10 22:58:30.038651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.393 [2024-12-10 22:58:30.038739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.393 [2024-12-10 22:58:30.038766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.393 [2024-12-10 22:58:30.038780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.393 [2024-12-10 22:58:30.038793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.393 [2024-12-10 22:58:30.038830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.393 qpair failed and we were unable to recover it. 
00:27:22.393 [2024-12-10 22:58:30.048684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.393 [2024-12-10 22:58:30.048771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.393 [2024-12-10 22:58:30.048800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.393 [2024-12-10 22:58:30.048815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.393 [2024-12-10 22:58:30.048827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.393 [2024-12-10 22:58:30.048856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.393 qpair failed and we were unable to recover it. 
00:27:22.393 [2024-12-10 22:58:30.058797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.393 [2024-12-10 22:58:30.058885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.393 [2024-12-10 22:58:30.058912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.393 [2024-12-10 22:58:30.058926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.393 [2024-12-10 22:58:30.058940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.393 [2024-12-10 22:58:30.058968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.393 qpair failed and we were unable to recover it. 
00:27:22.393 [2024-12-10 22:58:30.068741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.393 [2024-12-10 22:58:30.068828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.393 [2024-12-10 22:58:30.068854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.393 [2024-12-10 22:58:30.068868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.393 [2024-12-10 22:58:30.068880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.393 [2024-12-10 22:58:30.068910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.393 qpair failed and we were unable to recover it. 
00:27:22.393 [2024-12-10 22:58:30.078748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.393 [2024-12-10 22:58:30.078853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.393 [2024-12-10 22:58:30.078879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.393 [2024-12-10 22:58:30.078893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.394 [2024-12-10 22:58:30.078906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.394 [2024-12-10 22:58:30.078934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.394 qpair failed and we were unable to recover it. 
00:27:22.394 [2024-12-10 22:58:30.088954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.394 [2024-12-10 22:58:30.089059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.394 [2024-12-10 22:58:30.089085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.394 [2024-12-10 22:58:30.089100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.394 [2024-12-10 22:58:30.089113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.394 [2024-12-10 22:58:30.089141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.394 qpair failed and we were unable to recover it. 
00:27:22.394 [2024-12-10 22:58:30.098902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.394 [2024-12-10 22:58:30.099023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.394 [2024-12-10 22:58:30.099048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.394 [2024-12-10 22:58:30.099062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.394 [2024-12-10 22:58:30.099075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.394 [2024-12-10 22:58:30.099103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.394 qpair failed and we were unable to recover it. 
00:27:22.394 [2024-12-10 22:58:30.108891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.394 [2024-12-10 22:58:30.108979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.394 [2024-12-10 22:58:30.109005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.394 [2024-12-10 22:58:30.109020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.394 [2024-12-10 22:58:30.109032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.394 [2024-12-10 22:58:30.109061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.394 qpair failed and we were unable to recover it. 
00:27:22.394 [2024-12-10 22:58:30.118925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.394 [2024-12-10 22:58:30.119015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.394 [2024-12-10 22:58:30.119042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.394 [2024-12-10 22:58:30.119056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.394 [2024-12-10 22:58:30.119070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.394 [2024-12-10 22:58:30.119098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.394 qpair failed and we were unable to recover it. 
00:27:22.653 [2024-12-10 22:58:30.128991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.653 [2024-12-10 22:58:30.129074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.653 [2024-12-10 22:58:30.129100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.653 [2024-12-10 22:58:30.129121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.653 [2024-12-10 22:58:30.129135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.653 [2024-12-10 22:58:30.129164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.653 qpair failed and we were unable to recover it. 
00:27:22.653 [2024-12-10 22:58:30.138946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.653 [2024-12-10 22:58:30.139037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.139062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.139076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.139089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.139117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.149075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.654 [2024-12-10 22:58:30.149162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.149187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.149201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.149214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.149242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.158952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.654 [2024-12-10 22:58:30.159036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.159061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.159075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.159088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.159116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.169033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.654 [2024-12-10 22:58:30.169161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.169187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.169201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.169214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.169248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.179065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.654 [2024-12-10 22:58:30.179152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.179177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.179191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.179203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.179231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.189179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.654 [2024-12-10 22:58:30.189264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.189289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.189303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.189315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.189344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.199110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.654 [2024-12-10 22:58:30.199191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.199216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.199229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.199243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.199271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.209096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.654 [2024-12-10 22:58:30.209186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.209211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.209225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.209237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.209266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.219223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.654 [2024-12-10 22:58:30.219321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.219346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.219359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.219372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.219401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.229159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.654 [2024-12-10 22:58:30.229244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.229269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.229282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.229295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.229323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.239184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.654 [2024-12-10 22:58:30.239314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.239339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.239353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.239365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.239393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.249312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.654 [2024-12-10 22:58:30.249397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.654 [2024-12-10 22:58:30.249422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.654 [2024-12-10 22:58:30.249436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.654 [2024-12-10 22:58:30.249449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.654 [2024-12-10 22:58:30.249477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.654 qpair failed and we were unable to recover it. 
00:27:22.654 [2024-12-10 22:58:30.259273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.655 [2024-12-10 22:58:30.259361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.655 [2024-12-10 22:58:30.259385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.655 [2024-12-10 22:58:30.259406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.655 [2024-12-10 22:58:30.259420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.655 [2024-12-10 22:58:30.259448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.655 qpair failed and we were unable to recover it. 
00:27:22.655 [2024-12-10 22:58:30.269276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.655 [2024-12-10 22:58:30.269359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.655 [2024-12-10 22:58:30.269384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.655 [2024-12-10 22:58:30.269398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.655 [2024-12-10 22:58:30.269410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.655 [2024-12-10 22:58:30.269438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.655 qpair failed and we were unable to recover it. 
00:27:22.655 [2024-12-10 22:58:30.279334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.655 [2024-12-10 22:58:30.279419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.655 [2024-12-10 22:58:30.279444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.655 [2024-12-10 22:58:30.279458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.655 [2024-12-10 22:58:30.279471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.655 [2024-12-10 22:58:30.279499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.655 qpair failed and we were unable to recover it. 
00:27:22.655 [2024-12-10 22:58:30.289390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.655 [2024-12-10 22:58:30.289483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.655 [2024-12-10 22:58:30.289509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.655 [2024-12-10 22:58:30.289523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.655 [2024-12-10 22:58:30.289536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.655 [2024-12-10 22:58:30.289573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.655 qpair failed and we were unable to recover it. 
00:27:22.655 [2024-12-10 22:58:30.299409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.655 [2024-12-10 22:58:30.299525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.655 [2024-12-10 22:58:30.299560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.655 [2024-12-10 22:58:30.299576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.655 [2024-12-10 22:58:30.299589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:22.655 [2024-12-10 22:58:30.299624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.655 qpair failed and we were unable to recover it. 
00:27:22.655 [2024-12-10 22:58:30.309429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.655 [2024-12-10 22:58:30.309538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.655 [2024-12-10 22:58:30.309575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.655 [2024-12-10 22:58:30.309591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.655 [2024-12-10 22:58:30.309603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.655 [2024-12-10 22:58:30.309633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.655 qpair failed and we were unable to recover it.
00:27:22.655 [2024-12-10 22:58:30.319425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.655 [2024-12-10 22:58:30.319516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.655 [2024-12-10 22:58:30.319542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.655 [2024-12-10 22:58:30.319564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.655 [2024-12-10 22:58:30.319577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.655 [2024-12-10 22:58:30.319606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.655 qpair failed and we were unable to recover it.
00:27:22.655 [2024-12-10 22:58:30.329493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.655 [2024-12-10 22:58:30.329620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.655 [2024-12-10 22:58:30.329646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.655 [2024-12-10 22:58:30.329661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.655 [2024-12-10 22:58:30.329673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.655 [2024-12-10 22:58:30.329702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.655 qpair failed and we were unable to recover it.
00:27:22.655 [2024-12-10 22:58:30.339543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.655 [2024-12-10 22:58:30.339688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.655 [2024-12-10 22:58:30.339713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.655 [2024-12-10 22:58:30.339727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.655 [2024-12-10 22:58:30.339739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.655 [2024-12-10 22:58:30.339768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.655 qpair failed and we were unable to recover it.
00:27:22.655 [2024-12-10 22:58:30.349505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.655 [2024-12-10 22:58:30.349621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.655 [2024-12-10 22:58:30.349646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.655 [2024-12-10 22:58:30.349661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.655 [2024-12-10 22:58:30.349673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.655 [2024-12-10 22:58:30.349701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.655 qpair failed and we were unable to recover it.
00:27:22.655 [2024-12-10 22:58:30.359534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.655 [2024-12-10 22:58:30.359626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.655 [2024-12-10 22:58:30.359651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.655 [2024-12-10 22:58:30.359666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.655 [2024-12-10 22:58:30.359679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.655 [2024-12-10 22:58:30.359707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.655 qpair failed and we were unable to recover it.
00:27:22.655 [2024-12-10 22:58:30.369551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.655 [2024-12-10 22:58:30.369637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.655 [2024-12-10 22:58:30.369662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.655 [2024-12-10 22:58:30.369676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.655 [2024-12-10 22:58:30.369688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.655 [2024-12-10 22:58:30.369717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.655 qpair failed and we were unable to recover it.
00:27:22.655 [2024-12-10 22:58:30.379612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.656 [2024-12-10 22:58:30.379701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.656 [2024-12-10 22:58:30.379727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.656 [2024-12-10 22:58:30.379742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.656 [2024-12-10 22:58:30.379754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.656 [2024-12-10 22:58:30.379783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.656 qpair failed and we were unable to recover it.
00:27:22.915 [2024-12-10 22:58:30.389604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.915 [2024-12-10 22:58:30.389698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.389725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.389746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.389760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.389789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.399690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.916 [2024-12-10 22:58:30.399801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.399827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.399841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.399853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.399882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.409664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.916 [2024-12-10 22:58:30.409763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.409788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.409801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.409814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.409844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.419737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.916 [2024-12-10 22:58:30.419827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.419852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.419866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.419879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.419907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.429733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.916 [2024-12-10 22:58:30.429820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.429845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.429859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.429872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.429907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.439760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.916 [2024-12-10 22:58:30.439840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.439864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.439878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.439890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.439919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.449776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.916 [2024-12-10 22:58:30.449861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.449886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.449900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.449913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.449941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.459837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.916 [2024-12-10 22:58:30.459927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.459952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.459967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.459979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.460008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.469846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.916 [2024-12-10 22:58:30.469931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.469956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.469970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.469983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.470011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.479896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.916 [2024-12-10 22:58:30.479986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.480011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.480026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.480038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.480066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.489906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.916 [2024-12-10 22:58:30.489986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.490010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.490024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.490037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.490066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.499963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.916 [2024-12-10 22:58:30.500047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.916 [2024-12-10 22:58:30.500072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.916 [2024-12-10 22:58:30.500086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.916 [2024-12-10 22:58:30.500099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.916 [2024-12-10 22:58:30.500128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.916 qpair failed and we were unable to recover it.
00:27:22.916 [2024-12-10 22:58:30.510021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.510143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.510172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.510188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.510201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.510229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.520027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.520112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.520138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.520159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.520172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.520202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.530042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.530128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.530153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.530167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.530179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.530207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.540082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.540171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.540198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.540213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.540226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.540255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.550106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.550200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.550225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.550240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.550253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.550282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.560137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.560220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.560247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.560267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.560281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.560317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.570169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.570258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.570284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.570299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.570312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.570341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.580302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.580391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.580416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.580430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.580442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.580471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.590221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.590313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.590341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.590357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.590371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.590400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.600208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.600314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.600340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.600355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.600368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.600396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.610276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.610402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.610428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.610442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.610454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.610483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.620363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.620447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.917 [2024-12-10 22:58:30.620472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.917 [2024-12-10 22:58:30.620487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.917 [2024-12-10 22:58:30.620500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.917 [2024-12-10 22:58:30.620528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.917 qpair failed and we were unable to recover it.
00:27:22.917 [2024-12-10 22:58:30.630408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.917 [2024-12-10 22:58:30.630489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.918 [2024-12-10 22:58:30.630515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.918 [2024-12-10 22:58:30.630529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.918 [2024-12-10 22:58:30.630559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.918 [2024-12-10 22:58:30.630588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.918 qpair failed and we were unable to recover it.
00:27:22.918 [2024-12-10 22:58:30.640353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.918 [2024-12-10 22:58:30.640430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.918 [2024-12-10 22:58:30.640456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.918 [2024-12-10 22:58:30.640470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.918 [2024-12-10 22:58:30.640483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:22.918 [2024-12-10 22:58:30.640512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:22.918 qpair failed and we were unable to recover it.
00:27:23.179 [2024-12-10 22:58:30.650360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.179 [2024-12-10 22:58:30.650452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.179 [2024-12-10 22:58:30.650495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.179 [2024-12-10 22:58:30.650510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.179 [2024-12-10 22:58:30.650523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:23.179 [2024-12-10 22:58:30.650563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:23.179 qpair failed and we were unable to recover it.
00:27:23.179 [2024-12-10 22:58:30.660441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.179 [2024-12-10 22:58:30.660526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.179 [2024-12-10 22:58:30.660559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.179 [2024-12-10 22:58:30.660575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.179 [2024-12-10 22:58:30.660588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.660616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.670406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.670482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.670508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.670521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.670534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.670571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.680450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.680534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.680566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.680581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.680594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.680622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.690521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.690629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.690655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.690669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.690681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.690716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.700503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.700615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.700641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.700655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.700668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.700696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.710533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.710640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.710669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.710685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.710698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.710727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.720598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.720681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.720707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.720721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.720734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.720763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.730596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.730699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.730725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.730739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.730751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.730780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.740668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.740760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.740786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.740800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.740813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.740841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.750672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.750758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.750783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.750797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.750810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.750838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.760682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.760793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.760818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.760833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.760845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.760873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.770696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.770778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.770803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.770817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.770830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.180 [2024-12-10 22:58:30.770858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.180 qpair failed and we were unable to recover it. 
00:27:23.180 [2024-12-10 22:58:30.780760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.180 [2024-12-10 22:58:30.780896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.180 [2024-12-10 22:58:30.780926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.180 [2024-12-10 22:58:30.780941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.180 [2024-12-10 22:58:30.780954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.780983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.790767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.790852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.790878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.790891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.790904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.790933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.800799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.800923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.800948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.800962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.800974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.801002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.810829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.810953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.810978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.810992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.811005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.811033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.820877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.820965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.820989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.821003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.821016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.821053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.830886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.830970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.830995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.831009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.831022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.831050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.840930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.841047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.841072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.841086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.841099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.841127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.850980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.851098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.851126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.851142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.851155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.851185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.861008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.861097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.861122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.861136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.861149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.861178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.871114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.871242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.871268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.871282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.871295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.871323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.881012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.881097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.881123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.881136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.881149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.881177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.891070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.891196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.891221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.891235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.891247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.181 [2024-12-10 22:58:30.891274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.181 qpair failed and we were unable to recover it. 
00:27:23.181 [2024-12-10 22:58:30.901122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.181 [2024-12-10 22:58:30.901209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.181 [2024-12-10 22:58:30.901234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.181 [2024-12-10 22:58:30.901248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.181 [2024-12-10 22:58:30.901261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.182 [2024-12-10 22:58:30.901289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.182 qpair failed and we were unable to recover it. 
00:27:23.443 [2024-12-10 22:58:30.911138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.443 [2024-12-10 22:58:30.911256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.443 [2024-12-10 22:58:30.911288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.443 [2024-12-10 22:58:30.911303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.443 [2024-12-10 22:58:30.911316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.443 [2024-12-10 22:58:30.911345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.443 qpair failed and we were unable to recover it. 
00:27:23.443 [2024-12-10 22:58:30.921159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.443 [2024-12-10 22:58:30.921292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.443 [2024-12-10 22:58:30.921318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.443 [2024-12-10 22:58:30.921333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.443 [2024-12-10 22:58:30.921347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.443 [2024-12-10 22:58:30.921376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.443 qpair failed and we were unable to recover it. 
00:27:23.443 [2024-12-10 22:58:30.931168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.443 [2024-12-10 22:58:30.931252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.443 [2024-12-10 22:58:30.931278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.443 [2024-12-10 22:58:30.931291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.443 [2024-12-10 22:58:30.931304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:30.931333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:30.941199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:30.941285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:30.941309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:30.941322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:30.941334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:30.941362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:30.951217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:30.951318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:30.951344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:30.951358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:30.951370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:30.951405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:30.961230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:30.961312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:30.961337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:30.961351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:30.961364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:30.961392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:30.971352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:30.971433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:30.971459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:30.971473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:30.971487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:30.971515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:30.981343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:30.981435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:30.981460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:30.981474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:30.981486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:30.981516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:30.991354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:30.991485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:30.991514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:30.991530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:30.991543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:30.991583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:31.001363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:31.001452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:31.001477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:31.001491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:31.001503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:31.001534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:31.011477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:31.011570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:31.011596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:31.011610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:31.011623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:31.011652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:31.021439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:31.021533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:31.021569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:31.021584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:31.021597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:31.021626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:31.031470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:31.031562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:31.031588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:31.031603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:31.031615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:31.031646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:31.041497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:31.041594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:31.041626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:31.041647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:31.041661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.444 [2024-12-10 22:58:31.041690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.444 qpair failed and we were unable to recover it. 
00:27:23.444 [2024-12-10 22:58:31.051503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.444 [2024-12-10 22:58:31.051593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.444 [2024-12-10 22:58:31.051620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.444 [2024-12-10 22:58:31.051634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.444 [2024-12-10 22:58:31.051646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.051675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.061621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.445 [2024-12-10 22:58:31.061750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.445 [2024-12-10 22:58:31.061775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.445 [2024-12-10 22:58:31.061790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.445 [2024-12-10 22:58:31.061802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.061831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.071598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.445 [2024-12-10 22:58:31.071727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.445 [2024-12-10 22:58:31.071752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.445 [2024-12-10 22:58:31.071766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.445 [2024-12-10 22:58:31.071778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.071806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.081638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.445 [2024-12-10 22:58:31.081734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.445 [2024-12-10 22:58:31.081760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.445 [2024-12-10 22:58:31.081773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.445 [2024-12-10 22:58:31.081786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.081820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.091631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.445 [2024-12-10 22:58:31.091716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.445 [2024-12-10 22:58:31.091741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.445 [2024-12-10 22:58:31.091755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.445 [2024-12-10 22:58:31.091767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.091796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.101765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.445 [2024-12-10 22:58:31.101854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.445 [2024-12-10 22:58:31.101879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.445 [2024-12-10 22:58:31.101893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.445 [2024-12-10 22:58:31.101905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.101934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.111696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.445 [2024-12-10 22:58:31.111782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.445 [2024-12-10 22:58:31.111806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.445 [2024-12-10 22:58:31.111821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.445 [2024-12-10 22:58:31.111833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.111861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.121741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.445 [2024-12-10 22:58:31.121821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.445 [2024-12-10 22:58:31.121847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.445 [2024-12-10 22:58:31.121861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.445 [2024-12-10 22:58:31.121873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.121902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.131752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.445 [2024-12-10 22:58:31.131835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.445 [2024-12-10 22:58:31.131860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.445 [2024-12-10 22:58:31.131875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.445 [2024-12-10 22:58:31.131887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.131916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.141820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.445 [2024-12-10 22:58:31.141945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.445 [2024-12-10 22:58:31.141971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.445 [2024-12-10 22:58:31.141984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.445 [2024-12-10 22:58:31.141997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.142025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.151837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.445 [2024-12-10 22:58:31.151921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.445 [2024-12-10 22:58:31.151946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.445 [2024-12-10 22:58:31.151959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.445 [2024-12-10 22:58:31.151972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.152002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.161856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.445 [2024-12-10 22:58:31.161940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.445 [2024-12-10 22:58:31.161964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.445 [2024-12-10 22:58:31.161978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.445 [2024-12-10 22:58:31.161991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.445 [2024-12-10 22:58:31.162019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.445 qpair failed and we were unable to recover it. 
00:27:23.445 [2024-12-10 22:58:31.171872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.708 [2024-12-10 22:58:31.171996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.708 [2024-12-10 22:58:31.172028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.708 [2024-12-10 22:58:31.172044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.708 [2024-12-10 22:58:31.172056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.708 [2024-12-10 22:58:31.172085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.708 qpair failed and we were unable to recover it. 
00:27:23.708 [2024-12-10 22:58:31.181961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.708 [2024-12-10 22:58:31.182054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.708 [2024-12-10 22:58:31.182081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.708 [2024-12-10 22:58:31.182095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.708 [2024-12-10 22:58:31.182108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.708 [2024-12-10 22:58:31.182137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.708 qpair failed and we were unable to recover it. 
00:27:23.708 [2024-12-10 22:58:31.191959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.708 [2024-12-10 22:58:31.192051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.708 [2024-12-10 22:58:31.192076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.708 [2024-12-10 22:58:31.192090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.708 [2024-12-10 22:58:31.192102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.708 [2024-12-10 22:58:31.192131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.708 qpair failed and we were unable to recover it. 
00:27:23.708 [2024-12-10 22:58:31.202010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.708 [2024-12-10 22:58:31.202108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.708 [2024-12-10 22:58:31.202133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.708 [2024-12-10 22:58:31.202147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.708 [2024-12-10 22:58:31.202159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.708 [2024-12-10 22:58:31.202187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.708 qpair failed and we were unable to recover it. 
00:27:23.708 [2024-12-10 22:58:31.211988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.708 [2024-12-10 22:58:31.212109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.708 [2024-12-10 22:58:31.212134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.708 [2024-12-10 22:58:31.212149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.708 [2024-12-10 22:58:31.212167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.708 [2024-12-10 22:58:31.212197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.708 qpair failed and we were unable to recover it. 
00:27:23.708 [2024-12-10 22:58:31.222053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.708 [2024-12-10 22:58:31.222147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.708 [2024-12-10 22:58:31.222172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.708 [2024-12-10 22:58:31.222186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.708 [2024-12-10 22:58:31.222199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.708 [2024-12-10 22:58:31.222227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.708 qpair failed and we were unable to recover it. 
00:27:23.708 [2024-12-10 22:58:31.232162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.708 [2024-12-10 22:58:31.232286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.708 [2024-12-10 22:58:31.232315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.708 [2024-12-10 22:58:31.232332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.708 [2024-12-10 22:58:31.232345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.708 [2024-12-10 22:58:31.232375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.708 qpair failed and we were unable to recover it. 
00:27:23.708 [2024-12-10 22:58:31.242155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.708 [2024-12-10 22:58:31.242237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.708 [2024-12-10 22:58:31.242263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.708 [2024-12-10 22:58:31.242276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.708 [2024-12-10 22:58:31.242289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.708 [2024-12-10 22:58:31.242318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.708 qpair failed and we were unable to recover it. 
00:27:23.708 [2024-12-10 22:58:31.252128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.708 [2024-12-10 22:58:31.252240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.708 [2024-12-10 22:58:31.252269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.708 [2024-12-10 22:58:31.252285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.708 [2024-12-10 22:58:31.252298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.252327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.262181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.262273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.262299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.262313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.262326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.262354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.272186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.272294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.272320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.272333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.272346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.272375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.282187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.282269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.282295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.282309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.282321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.282349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.292199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.292312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.292337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.292351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.292364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.292394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.302354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.302443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.302474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.302489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.302501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.302530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.312250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.312335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.312360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.312374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.312387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.312415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.322266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.322376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.322400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.322414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.322427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.322455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.332289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.332367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.332392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.332407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.332419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.332447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.342415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.342504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.342529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.342542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.342569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.342598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.352364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.352448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.352474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.352488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.352501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.352529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.362386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.362498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.362523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.362537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.362558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.362588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.709 [2024-12-10 22:58:31.372405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.709 [2024-12-10 22:58:31.372493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.709 [2024-12-10 22:58:31.372518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.709 [2024-12-10 22:58:31.372532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.709 [2024-12-10 22:58:31.372550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.709 [2024-12-10 22:58:31.372581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.709 qpair failed and we were unable to recover it. 
00:27:23.710 [2024-12-10 22:58:31.382473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.710 [2024-12-10 22:58:31.382565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.710 [2024-12-10 22:58:31.382590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.710 [2024-12-10 22:58:31.382603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.710 [2024-12-10 22:58:31.382616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.710 [2024-12-10 22:58:31.382644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.710 qpair failed and we were unable to recover it. 
00:27:23.710 [2024-12-10 22:58:31.392573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.710 [2024-12-10 22:58:31.392694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.710 [2024-12-10 22:58:31.392719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.710 [2024-12-10 22:58:31.392733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.710 [2024-12-10 22:58:31.392746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.710 [2024-12-10 22:58:31.392774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.710 qpair failed and we were unable to recover it. 
00:27:23.710 [2024-12-10 22:58:31.402522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.710 [2024-12-10 22:58:31.402622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.710 [2024-12-10 22:58:31.402647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.710 [2024-12-10 22:58:31.402661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.710 [2024-12-10 22:58:31.402674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.710 [2024-12-10 22:58:31.402702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.710 qpair failed and we were unable to recover it. 
00:27:23.710 [2024-12-10 22:58:31.412488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.710 [2024-12-10 22:58:31.412610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.710 [2024-12-10 22:58:31.412636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.710 [2024-12-10 22:58:31.412650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.710 [2024-12-10 22:58:31.412663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.710 [2024-12-10 22:58:31.412693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.710 qpair failed and we were unable to recover it. 
00:27:23.710 [2024-12-10 22:58:31.422591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.710 [2024-12-10 22:58:31.422733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.710 [2024-12-10 22:58:31.422762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.710 [2024-12-10 22:58:31.422777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.710 [2024-12-10 22:58:31.422791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.710 [2024-12-10 22:58:31.422820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.710 qpair failed and we were unable to recover it. 
00:27:23.710 [2024-12-10 22:58:31.432653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.710 [2024-12-10 22:58:31.432736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.710 [2024-12-10 22:58:31.432768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.710 [2024-12-10 22:58:31.432783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.710 [2024-12-10 22:58:31.432796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.710 [2024-12-10 22:58:31.432825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.710 qpair failed and we were unable to recover it. 
00:27:23.970 [2024-12-10 22:58:31.442646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.970 [2024-12-10 22:58:31.442728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.970 [2024-12-10 22:58:31.442755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.970 [2024-12-10 22:58:31.442769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.970 [2024-12-10 22:58:31.442782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.970 [2024-12-10 22:58:31.442811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.970 qpair failed and we were unable to recover it. 
00:27:23.970 [2024-12-10 22:58:31.452630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.970 [2024-12-10 22:58:31.452708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.970 [2024-12-10 22:58:31.452734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.970 [2024-12-10 22:58:31.452748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.970 [2024-12-10 22:58:31.452760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.970 [2024-12-10 22:58:31.452789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.970 qpair failed and we were unable to recover it. 
00:27:23.970 [2024-12-10 22:58:31.462724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.970 [2024-12-10 22:58:31.462822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.970 [2024-12-10 22:58:31.462846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.970 [2024-12-10 22:58:31.462860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.970 [2024-12-10 22:58:31.462873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.970 [2024-12-10 22:58:31.462902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.970 qpair failed and we were unable to recover it. 
00:27:23.970 [2024-12-10 22:58:31.472713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.970 [2024-12-10 22:58:31.472790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.970 [2024-12-10 22:58:31.472815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.970 [2024-12-10 22:58:31.472829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.970 [2024-12-10 22:58:31.472852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.970 [2024-12-10 22:58:31.472881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.970 qpair failed and we were unable to recover it. 
00:27:23.970 [2024-12-10 22:58:31.482728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.970 [2024-12-10 22:58:31.482859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.970 [2024-12-10 22:58:31.482884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.970 [2024-12-10 22:58:31.482898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.970 [2024-12-10 22:58:31.482910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.970 [2024-12-10 22:58:31.482938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.970 qpair failed and we were unable to recover it. 
00:27:23.970 [2024-12-10 22:58:31.492765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.970 [2024-12-10 22:58:31.492897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.970 [2024-12-10 22:58:31.492922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.970 [2024-12-10 22:58:31.492935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.970 [2024-12-10 22:58:31.492949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.970 [2024-12-10 22:58:31.492977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.970 qpair failed and we were unable to recover it. 
00:27:23.970 [2024-12-10 22:58:31.502797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.970 [2024-12-10 22:58:31.502916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.970 [2024-12-10 22:58:31.502941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.971 [2024-12-10 22:58:31.502954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.971 [2024-12-10 22:58:31.502967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.971 [2024-12-10 22:58:31.502995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.971 qpair failed and we were unable to recover it. 
00:27:23.971 [2024-12-10 22:58:31.512839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.971 [2024-12-10 22:58:31.512925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.971 [2024-12-10 22:58:31.512950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.971 [2024-12-10 22:58:31.512963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.971 [2024-12-10 22:58:31.512977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.971 [2024-12-10 22:58:31.513007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.971 qpair failed and we were unable to recover it. 
00:27:23.971 [2024-12-10 22:58:31.522919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.971 [2024-12-10 22:58:31.523011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.971 [2024-12-10 22:58:31.523036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.971 [2024-12-10 22:58:31.523051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.971 [2024-12-10 22:58:31.523063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.971 [2024-12-10 22:58:31.523092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.971 qpair failed and we were unable to recover it. 
00:27:23.971 [2024-12-10 22:58:31.532898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.971 [2024-12-10 22:58:31.532992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.971 [2024-12-10 22:58:31.533016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.971 [2024-12-10 22:58:31.533030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.971 [2024-12-10 22:58:31.533043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.971 [2024-12-10 22:58:31.533071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.971 qpair failed and we were unable to recover it. 
00:27:23.971 [2024-12-10 22:58:31.542888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.971 [2024-12-10 22:58:31.542978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.971 [2024-12-10 22:58:31.543003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.971 [2024-12-10 22:58:31.543017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.971 [2024-12-10 22:58:31.543030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.971 [2024-12-10 22:58:31.543058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.971 qpair failed and we were unable to recover it. 
00:27:23.971 [2024-12-10 22:58:31.552936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.971 [2024-12-10 22:58:31.553022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.971 [2024-12-10 22:58:31.553046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.971 [2024-12-10 22:58:31.553060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.971 [2024-12-10 22:58:31.553073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:23.971 [2024-12-10 22:58:31.553101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.971 qpair failed and we were unable to recover it. 
00:27:23.971 - 00:27:24.234 [2024-12-10 22:58:31.562939 - 22:58:31.894127] (the failure sequence above — _nvmf_ctrlr_add_io_qpair Unknown controller ID 0x1, CONNECT sct 1 sc 130, CQ transport error -6 on qpair id 3, tqpair=0x1ea6fa0 — repeats at ~10 ms intervals; 34 duplicate entries elided) 00:27:24.234 qpair failed and we were unable to recover it. 
00:27:24.234 [2024-12-10 22:58:31.904040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.234 [2024-12-10 22:58:31.904160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.234 [2024-12-10 22:58:31.904184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.234 [2024-12-10 22:58:31.904199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.234 [2024-12-10 22:58:31.904211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.234 [2024-12-10 22:58:31.904239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.234 qpair failed and we were unable to recover it. 
00:27:24.234 [2024-12-10 22:58:31.913957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.234 [2024-12-10 22:58:31.914039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.234 [2024-12-10 22:58:31.914064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.234 [2024-12-10 22:58:31.914078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.234 [2024-12-10 22:58:31.914091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.234 [2024-12-10 22:58:31.914119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.234 qpair failed and we were unable to recover it. 
00:27:24.234 [2024-12-10 22:58:31.924035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.234 [2024-12-10 22:58:31.924130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.234 [2024-12-10 22:58:31.924155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.234 [2024-12-10 22:58:31.924169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.234 [2024-12-10 22:58:31.924182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.234 [2024-12-10 22:58:31.924210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.234 qpair failed and we were unable to recover it. 
00:27:24.234 [2024-12-10 22:58:31.934007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.234 [2024-12-10 22:58:31.934135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.234 [2024-12-10 22:58:31.934161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.234 [2024-12-10 22:58:31.934175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.234 [2024-12-10 22:58:31.934189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.234 [2024-12-10 22:58:31.934217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.234 qpair failed and we were unable to recover it. 
00:27:24.234 [2024-12-10 22:58:31.944127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.234 [2024-12-10 22:58:31.944266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.234 [2024-12-10 22:58:31.944297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.234 [2024-12-10 22:58:31.944310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.234 [2024-12-10 22:58:31.944322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.234 [2024-12-10 22:58:31.944349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.234 qpair failed and we were unable to recover it. 
00:27:24.234 [2024-12-10 22:58:31.954111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.234 [2024-12-10 22:58:31.954197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.234 [2024-12-10 22:58:31.954228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.234 [2024-12-10 22:58:31.954243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.234 [2024-12-10 22:58:31.954255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.234 [2024-12-10 22:58:31.954284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.234 qpair failed and we were unable to recover it. 
00:27:24.493 [2024-12-10 22:58:31.964132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.493 [2024-12-10 22:58:31.964248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.493 [2024-12-10 22:58:31.964274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.493 [2024-12-10 22:58:31.964289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.493 [2024-12-10 22:58:31.964301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.493 [2024-12-10 22:58:31.964330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.493 qpair failed and we were unable to recover it. 
00:27:24.493 [2024-12-10 22:58:31.974126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.493 [2024-12-10 22:58:31.974209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.493 [2024-12-10 22:58:31.974235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.493 [2024-12-10 22:58:31.974249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.493 [2024-12-10 22:58:31.974261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.493 [2024-12-10 22:58:31.974291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.493 qpair failed and we were unable to recover it. 
00:27:24.493 [2024-12-10 22:58:31.984153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.493 [2024-12-10 22:58:31.984242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.493 [2024-12-10 22:58:31.984267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.493 [2024-12-10 22:58:31.984280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.493 [2024-12-10 22:58:31.984293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.493 [2024-12-10 22:58:31.984321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.493 qpair failed and we were unable to recover it. 
00:27:24.493 [2024-12-10 22:58:31.994168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.493 [2024-12-10 22:58:31.994252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.493 [2024-12-10 22:58:31.994277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.493 [2024-12-10 22:58:31.994291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.493 [2024-12-10 22:58:31.994310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:31.994338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.004241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.004340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.004365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.004380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.004392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.004421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.014247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.014329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.014354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.014368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.014381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.014408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.024278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.024365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.024390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.024403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.024416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.024444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.034314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.034426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.034451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.034465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.034478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.034506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.044331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.044411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.044436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.044450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.044463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.044491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.054328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.054403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.054428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.054442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.054455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.054483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.064372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.064466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.064491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.064506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.064518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.064553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.074440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.074561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.074586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.074601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.074613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.074642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.084452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.084568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.084598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.084613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.084626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.084654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.094525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.094627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.094652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.094666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.094679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.094707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.104508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.104607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.104632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.104646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.104659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.104687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.114557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.114651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.114676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.114690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.114703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.114731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.124644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.124724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.494 [2024-12-10 22:58:32.124748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.494 [2024-12-10 22:58:32.124762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.494 [2024-12-10 22:58:32.124783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.494 [2024-12-10 22:58:32.124814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-12-10 22:58:32.134598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.494 [2024-12-10 22:58:32.134682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.495 [2024-12-10 22:58:32.134706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.495 [2024-12-10 22:58:32.134721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.495 [2024-12-10 22:58:32.134733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.495 [2024-12-10 22:58:32.134761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.495 qpair failed and we were unable to recover it. 
00:27:24.495 [2024-12-10 22:58:32.144645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.495 [2024-12-10 22:58:32.144733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.495 [2024-12-10 22:58:32.144759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.495 [2024-12-10 22:58:32.144773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.495 [2024-12-10 22:58:32.144785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.495 [2024-12-10 22:58:32.144813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.495 qpair failed and we were unable to recover it. 
00:27:24.495 [2024-12-10 22:58:32.154732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.495 [2024-12-10 22:58:32.154841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.495 [2024-12-10 22:58:32.154866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.495 [2024-12-10 22:58:32.154880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.495 [2024-12-10 22:58:32.154892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.495 [2024-12-10 22:58:32.154920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.495 qpair failed and we were unable to recover it. 
00:27:24.495 [2024-12-10 22:58:32.164700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.495 [2024-12-10 22:58:32.164786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.495 [2024-12-10 22:58:32.164810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.495 [2024-12-10 22:58:32.164824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.495 [2024-12-10 22:58:32.164836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:24.495 [2024-12-10 22:58:32.164864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:24.495 qpair failed and we were unable to recover it. 
00:27:24.495 [2024-12-10 22:58:32.174688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.495 [2024-12-10 22:58:32.174772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.495 [2024-12-10 22:58:32.174797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.495 [2024-12-10 22:58:32.174810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.495 [2024-12-10 22:58:32.174823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.495 [2024-12-10 22:58:32.174851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.495 qpair failed and we were unable to recover it.
00:27:24.495 [2024-12-10 22:58:32.184767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.495 [2024-12-10 22:58:32.184855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.495 [2024-12-10 22:58:32.184880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.495 [2024-12-10 22:58:32.184895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.495 [2024-12-10 22:58:32.184907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.495 [2024-12-10 22:58:32.184935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.495 qpair failed and we were unable to recover it.
00:27:24.495 [2024-12-10 22:58:32.194759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.495 [2024-12-10 22:58:32.194854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.495 [2024-12-10 22:58:32.194879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.495 [2024-12-10 22:58:32.194893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.495 [2024-12-10 22:58:32.194905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.495 [2024-12-10 22:58:32.194934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.495 qpair failed and we were unable to recover it.
00:27:24.495 [2024-12-10 22:58:32.204808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.495 [2024-12-10 22:58:32.204920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.495 [2024-12-10 22:58:32.204946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.495 [2024-12-10 22:58:32.204960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.495 [2024-12-10 22:58:32.204973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.495 [2024-12-10 22:58:32.205000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.495 qpair failed and we were unable to recover it.
00:27:24.495 [2024-12-10 22:58:32.214914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.495 [2024-12-10 22:58:32.214996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.495 [2024-12-10 22:58:32.215027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.495 [2024-12-10 22:58:32.215041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.495 [2024-12-10 22:58:32.215054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.495 [2024-12-10 22:58:32.215082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.495 qpair failed and we were unable to recover it.
00:27:24.754 [2024-12-10 22:58:32.224897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.754 [2024-12-10 22:58:32.224988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.754 [2024-12-10 22:58:32.225015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.754 [2024-12-10 22:58:32.225029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.754 [2024-12-10 22:58:32.225042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.225070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.234881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.234964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.234989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.235004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.235016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.235044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.244895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.244973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.244999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.245013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.245026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.245054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.254927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.255006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.255031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.255045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.255063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.255093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.265024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.265139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.265164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.265178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.265191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.265220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.275022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.275104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.275130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.275144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.275156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.275184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.285057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.285187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.285213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.285227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.285240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.285268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.295059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.295152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.295177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.295192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.295204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.295232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.305108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.305197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.305222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.305235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.305248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.305276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.315128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.315216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.315241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.315255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.315268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.315296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.325150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.325238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.325263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.325277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.325289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.325318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.335193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.335283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.335308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.335322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.335335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.335363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.345236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.345351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.345381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.345397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.345409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.345437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.355279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.355366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.355391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.355406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.355418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.355447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.365252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.365337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.755 [2024-12-10 22:58:32.365362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.755 [2024-12-10 22:58:32.365377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.755 [2024-12-10 22:58:32.365389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.755 [2024-12-10 22:58:32.365418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.755 qpair failed and we were unable to recover it.
00:27:24.755 [2024-12-10 22:58:32.375287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.755 [2024-12-10 22:58:32.375372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.756 [2024-12-10 22:58:32.375396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.756 [2024-12-10 22:58:32.375410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.756 [2024-12-10 22:58:32.375423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.756 [2024-12-10 22:58:32.375451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.756 qpair failed and we were unable to recover it.
00:27:24.756 [2024-12-10 22:58:32.385310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.756 [2024-12-10 22:58:32.385397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.756 [2024-12-10 22:58:32.385422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.756 [2024-12-10 22:58:32.385436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.756 [2024-12-10 22:58:32.385454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.756 [2024-12-10 22:58:32.385483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.756 qpair failed and we were unable to recover it.
00:27:24.756 [2024-12-10 22:58:32.395356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.756 [2024-12-10 22:58:32.395439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.756 [2024-12-10 22:58:32.395464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.756 [2024-12-10 22:58:32.395478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.756 [2024-12-10 22:58:32.395490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.756 [2024-12-10 22:58:32.395519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.756 qpair failed and we were unable to recover it.
00:27:24.756 [2024-12-10 22:58:32.405381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.756 [2024-12-10 22:58:32.405471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.756 [2024-12-10 22:58:32.405495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.756 [2024-12-10 22:58:32.405509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.756 [2024-12-10 22:58:32.405521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.756 [2024-12-10 22:58:32.405557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.756 qpair failed and we were unable to recover it.
00:27:24.756 [2024-12-10 22:58:32.415399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.756 [2024-12-10 22:58:32.415486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.756 [2024-12-10 22:58:32.415514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.756 [2024-12-10 22:58:32.415531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.756 [2024-12-10 22:58:32.415550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.756 [2024-12-10 22:58:32.415582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.756 qpair failed and we were unable to recover it.
00:27:24.756 [2024-12-10 22:58:32.425448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.756 [2024-12-10 22:58:32.425575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.756 [2024-12-10 22:58:32.425601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.756 [2024-12-10 22:58:32.425615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.756 [2024-12-10 22:58:32.425628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.756 [2024-12-10 22:58:32.425657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.756 qpair failed and we were unable to recover it.
00:27:24.756 [2024-12-10 22:58:32.435455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.756 [2024-12-10 22:58:32.435540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.756 [2024-12-10 22:58:32.435571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.756 [2024-12-10 22:58:32.435585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.756 [2024-12-10 22:58:32.435598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.756 [2024-12-10 22:58:32.435627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.756 qpair failed and we were unable to recover it.
00:27:24.756 [2024-12-10 22:58:32.445585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.756 [2024-12-10 22:58:32.445667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.756 [2024-12-10 22:58:32.445691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.756 [2024-12-10 22:58:32.445705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.756 [2024-12-10 22:58:32.445718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.756 [2024-12-10 22:58:32.445747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.756 qpair failed and we were unable to recover it.
00:27:24.756 [2024-12-10 22:58:32.455520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.756 [2024-12-10 22:58:32.455613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.756 [2024-12-10 22:58:32.455638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.756 [2024-12-10 22:58:32.455653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.756 [2024-12-10 22:58:32.455665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.756 [2024-12-10 22:58:32.455694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.756 qpair failed and we were unable to recover it.
00:27:24.756 [2024-12-10 22:58:32.465646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.756 [2024-12-10 22:58:32.465776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.756 [2024-12-10 22:58:32.465800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.756 [2024-12-10 22:58:32.465814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.756 [2024-12-10 22:58:32.465827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.756 [2024-12-10 22:58:32.465855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.756 qpair failed and we were unable to recover it.
00:27:24.756 [2024-12-10 22:58:32.475565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.756 [2024-12-10 22:58:32.475650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.756 [2024-12-10 22:58:32.475681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.756 [2024-12-10 22:58:32.475696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.756 [2024-12-10 22:58:32.475708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:24.756 [2024-12-10 22:58:32.475737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.756 qpair failed and we were unable to recover it.
00:27:25.017 [2024-12-10 22:58:32.485592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.017 [2024-12-10 22:58:32.485679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.017 [2024-12-10 22:58:32.485705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.017 [2024-12-10 22:58:32.485719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.017 [2024-12-10 22:58:32.485733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:25.017 [2024-12-10 22:58:32.485762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:25.017 qpair failed and we were unable to recover it.
00:27:25.017 [2024-12-10 22:58:32.495631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.017 [2024-12-10 22:58:32.495718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.017 [2024-12-10 22:58:32.495744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.017 [2024-12-10 22:58:32.495758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.017 [2024-12-10 22:58:32.495771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:25.017 [2024-12-10 22:58:32.495799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:25.017 qpair failed and we were unable to recover it.
00:27:25.017 [2024-12-10 22:58:32.505676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.017 [2024-12-10 22:58:32.505766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.017 [2024-12-10 22:58:32.505791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.017 [2024-12-10 22:58:32.505805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.017 [2024-12-10 22:58:32.505818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:25.017 [2024-12-10 22:58:32.505846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:25.017 qpair failed and we were unable to recover it.
00:27:25.017 [2024-12-10 22:58:32.515722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.017 [2024-12-10 22:58:32.515805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.017 [2024-12-10 22:58:32.515830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.017 [2024-12-10 22:58:32.515844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.017 [2024-12-10 22:58:32.515863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:25.017 [2024-12-10 22:58:32.515893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:25.017 qpair failed and we were unable to recover it.
00:27:25.017 [2024-12-10 22:58:32.525738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.017 [2024-12-10 22:58:32.525825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.017 [2024-12-10 22:58:32.525850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.017 [2024-12-10 22:58:32.525864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.017 [2024-12-10 22:58:32.525876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.017 [2024-12-10 22:58:32.525905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.017 qpair failed and we were unable to recover it. 
00:27:25.017 [2024-12-10 22:58:32.535752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.017 [2024-12-10 22:58:32.535836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.017 [2024-12-10 22:58:32.535862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.017 [2024-12-10 22:58:32.535876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.017 [2024-12-10 22:58:32.535888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.017 [2024-12-10 22:58:32.535916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.017 qpair failed and we were unable to recover it. 
00:27:25.017 [2024-12-10 22:58:32.545874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.017 [2024-12-10 22:58:32.545966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.017 [2024-12-10 22:58:32.545991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.017 [2024-12-10 22:58:32.546005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.017 [2024-12-10 22:58:32.546018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.017 [2024-12-10 22:58:32.546046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.017 qpair failed and we were unable to recover it. 
00:27:25.017 [2024-12-10 22:58:32.555844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.017 [2024-12-10 22:58:32.555930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.017 [2024-12-10 22:58:32.555956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.017 [2024-12-10 22:58:32.555970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.017 [2024-12-10 22:58:32.555982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.017 [2024-12-10 22:58:32.556010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.017 qpair failed and we were unable to recover it. 
00:27:25.017 [2024-12-10 22:58:32.565826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.017 [2024-12-10 22:58:32.565908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.017 [2024-12-10 22:58:32.565933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.017 [2024-12-10 22:58:32.565947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.017 [2024-12-10 22:58:32.565960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.017 [2024-12-10 22:58:32.565988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.017 qpair failed and we were unable to recover it. 
00:27:25.017 [2024-12-10 22:58:32.575854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.017 [2024-12-10 22:58:32.575943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.017 [2024-12-10 22:58:32.575968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.017 [2024-12-10 22:58:32.575982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.017 [2024-12-10 22:58:32.575995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.017 [2024-12-10 22:58:32.576022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.017 qpair failed and we were unable to recover it. 
00:27:25.017 [2024-12-10 22:58:32.585914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.017 [2024-12-10 22:58:32.586001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.017 [2024-12-10 22:58:32.586026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.017 [2024-12-10 22:58:32.586040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.017 [2024-12-10 22:58:32.586053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.586080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.595991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.596071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.596096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.596110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.596123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.596151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.605942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.606021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.606051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.606065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.606078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.606106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.616077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.616214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.616240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.616254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.616267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.616295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.626028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.626115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.626140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.626154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.626167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.626194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.636019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.636104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.636129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.636143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.636155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.636183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.646079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.646172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.646200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.646216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.646235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.646264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.656103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.656232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.656258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.656273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.656285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.656313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.666149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.666273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.666299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.666312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.666325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.666353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.676205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.676310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.676335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.676349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.676361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.676389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.686259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.686390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.686415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.686429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.686441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.686469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.696280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.696361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.696386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.696400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.696413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.696441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.706237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.706326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.706351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.706365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.706377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.706405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.716277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.018 [2024-12-10 22:58:32.716365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.018 [2024-12-10 22:58:32.716390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.018 [2024-12-10 22:58:32.716404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.018 [2024-12-10 22:58:32.716417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.018 [2024-12-10 22:58:32.716445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.018 qpair failed and we were unable to recover it. 
00:27:25.018 [2024-12-10 22:58:32.726324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.019 [2024-12-10 22:58:32.726408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.019 [2024-12-10 22:58:32.726433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.019 [2024-12-10 22:58:32.726447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.019 [2024-12-10 22:58:32.726461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.019 [2024-12-10 22:58:32.726489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.019 qpair failed and we were unable to recover it. 
00:27:25.019 [2024-12-10 22:58:32.736347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.019 [2024-12-10 22:58:32.736431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.019 [2024-12-10 22:58:32.736463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.019 [2024-12-10 22:58:32.736477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.019 [2024-12-10 22:58:32.736490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.019 [2024-12-10 22:58:32.736518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.019 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-12-10 22:58:32.746347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.279 [2024-12-10 22:58:32.746434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.279 [2024-12-10 22:58:32.746460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.279 [2024-12-10 22:58:32.746483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.279 [2024-12-10 22:58:32.746502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.279 [2024-12-10 22:58:32.746533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-12-10 22:58:32.756378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.279 [2024-12-10 22:58:32.756463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.279 [2024-12-10 22:58:32.756490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.279 [2024-12-10 22:58:32.756504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.279 [2024-12-10 22:58:32.756517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.279 [2024-12-10 22:58:32.756553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-12-10 22:58:32.766404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.279 [2024-12-10 22:58:32.766484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.279 [2024-12-10 22:58:32.766509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.279 [2024-12-10 22:58:32.766523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.279 [2024-12-10 22:58:32.766536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.279 [2024-12-10 22:58:32.766576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-12-10 22:58:32.776418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.279 [2024-12-10 22:58:32.776506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.279 [2024-12-10 22:58:32.776531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.279 [2024-12-10 22:58:32.776551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.279 [2024-12-10 22:58:32.776576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.279 [2024-12-10 22:58:32.776606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-12-10 22:58:32.786591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.279 [2024-12-10 22:58:32.786681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.279 [2024-12-10 22:58:32.786706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.279 [2024-12-10 22:58:32.786720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.279 [2024-12-10 22:58:32.786734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.279 [2024-12-10 22:58:32.786762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-12-10 22:58:32.796537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.279 [2024-12-10 22:58:32.796647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.279 [2024-12-10 22:58:32.796672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.279 [2024-12-10 22:58:32.796686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.279 [2024-12-10 22:58:32.796699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.279 [2024-12-10 22:58:32.796728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-12-10 22:58:32.806535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.279 [2024-12-10 22:58:32.806628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.279 [2024-12-10 22:58:32.806654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.279 [2024-12-10 22:58:32.806668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.279 [2024-12-10 22:58:32.806680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.279 [2024-12-10 22:58:32.806709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-12-10 22:58:32.816536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.279 [2024-12-10 22:58:32.816624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.279 [2024-12-10 22:58:32.816648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.279 [2024-12-10 22:58:32.816663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.279 [2024-12-10 22:58:32.816675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.279 [2024-12-10 22:58:32.816704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-12-10 22:58:32.826598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.279 [2024-12-10 22:58:32.826688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.826713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.826728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.826741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.826769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.836599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.836689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.836715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.836729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.836742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.836771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.846615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.846702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.846728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.846743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.846756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.846784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.856641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.856720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.856745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.856760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.856773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.856801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.866743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.866830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.866861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.866877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.866890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.866918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.876686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.876804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.876830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.876844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.876856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.876884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.886754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.886836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.886860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.886874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.886887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.886915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.896741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.896823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.896852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.896866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.896878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.896906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.906837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.906928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.906953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.906967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.906984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.907014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.916804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.916890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.916915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.916930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.916943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.916971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.926842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.926924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.926950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.926964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.926978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.927006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.936859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.936944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.936969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.936984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.936997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.937025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.946905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.947001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.947025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.947040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.947052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.947080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-12-10 22:58:32.956948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.280 [2024-12-10 22:58:32.957068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.280 [2024-12-10 22:58:32.957094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.280 [2024-12-10 22:58:32.957108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.280 [2024-12-10 22:58:32.957121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.280 [2024-12-10 22:58:32.957149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.281 qpair failed and we were unable to recover it. 
00:27:25.281 [2024-12-10 22:58:32.966954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.281 [2024-12-10 22:58:32.967038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.281 [2024-12-10 22:58:32.967063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.281 [2024-12-10 22:58:32.967077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.281 [2024-12-10 22:58:32.967090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.281 [2024-12-10 22:58:32.967118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.281 qpair failed and we were unable to recover it. 
00:27:25.281 [2024-12-10 22:58:32.976966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.281 [2024-12-10 22:58:32.977048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.281 [2024-12-10 22:58:32.977073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.281 [2024-12-10 22:58:32.977087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.281 [2024-12-10 22:58:32.977099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.281 [2024-12-10 22:58:32.977127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.281 qpair failed and we were unable to recover it. 
00:27:25.281 [2024-12-10 22:58:32.987032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.281 [2024-12-10 22:58:32.987126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.281 [2024-12-10 22:58:32.987151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.281 [2024-12-10 22:58:32.987165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.281 [2024-12-10 22:58:32.987177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.281 [2024-12-10 22:58:32.987205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.281 qpair failed and we were unable to recover it. 
00:27:25.281 [2024-12-10 22:58:32.997022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.281 [2024-12-10 22:58:32.997104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.281 [2024-12-10 22:58:32.997134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.281 [2024-12-10 22:58:32.997149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.281 [2024-12-10 22:58:32.997162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.281 [2024-12-10 22:58:32.997190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.281 qpair failed and we were unable to recover it. 
00:27:25.281 [2024-12-10 22:58:33.007080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.281 [2024-12-10 22:58:33.007199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.281 [2024-12-10 22:58:33.007225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.281 [2024-12-10 22:58:33.007240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.281 [2024-12-10 22:58:33.007253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.281 [2024-12-10 22:58:33.007281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.281 qpair failed and we were unable to recover it. 
00:27:25.542 [2024-12-10 22:58:33.017155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.542 [2024-12-10 22:58:33.017242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.542 [2024-12-10 22:58:33.017267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.542 [2024-12-10 22:58:33.017281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.542 [2024-12-10 22:58:33.017294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.542 [2024-12-10 22:58:33.017324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.542 qpair failed and we were unable to recover it. 
00:27:25.542 [2024-12-10 22:58:33.027140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.542 [2024-12-10 22:58:33.027260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.542 [2024-12-10 22:58:33.027286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.542 [2024-12-10 22:58:33.027301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.542 [2024-12-10 22:58:33.027313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.542 [2024-12-10 22:58:33.027342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.542 qpair failed and we were unable to recover it. 
00:27:25.542 [2024-12-10 22:58:33.037169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.542 [2024-12-10 22:58:33.037259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.542 [2024-12-10 22:58:33.037284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.542 [2024-12-10 22:58:33.037305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.542 [2024-12-10 22:58:33.037319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.542 [2024-12-10 22:58:33.037347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.542 qpair failed and we were unable to recover it. 
00:27:25.542 [2024-12-10 22:58:33.047153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.542 [2024-12-10 22:58:33.047247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.542 [2024-12-10 22:58:33.047273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.542 [2024-12-10 22:58:33.047287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.542 [2024-12-10 22:58:33.047300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.542 [2024-12-10 22:58:33.047328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.542 qpair failed and we were unable to recover it. 
00:27:25.542 [2024-12-10 22:58:33.057197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.542 [2024-12-10 22:58:33.057280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.542 [2024-12-10 22:58:33.057304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.542 [2024-12-10 22:58:33.057318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.542 [2024-12-10 22:58:33.057331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.542 [2024-12-10 22:58:33.057359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.542 qpair failed and we were unable to recover it. 
00:27:25.542 [2024-12-10 22:58:33.067249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.542 [2024-12-10 22:58:33.067337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.542 [2024-12-10 22:58:33.067362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.542 [2024-12-10 22:58:33.067376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.542 [2024-12-10 22:58:33.067389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.542 [2024-12-10 22:58:33.067417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.542 qpair failed and we were unable to recover it. 
00:27:25.542 [2024-12-10 22:58:33.077250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.542 [2024-12-10 22:58:33.077385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.542 [2024-12-10 22:58:33.077411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.542 [2024-12-10 22:58:33.077425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.542 [2024-12-10 22:58:33.077437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.542 [2024-12-10 22:58:33.077466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.542 qpair failed and we were unable to recover it. 
00:27:25.542 [2024-12-10 22:58:33.087264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.542 [2024-12-10 22:58:33.087353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.542 [2024-12-10 22:58:33.087379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.542 [2024-12-10 22:58:33.087393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.542 [2024-12-10 22:58:33.087405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.542 [2024-12-10 22:58:33.087434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.542 qpair failed and we were unable to recover it. 
00:27:25.542 [2024-12-10 22:58:33.097344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.542 [2024-12-10 22:58:33.097464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.542 [2024-12-10 22:58:33.097489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.542 [2024-12-10 22:58:33.097502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.542 [2024-12-10 22:58:33.097514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.542 [2024-12-10 22:58:33.097543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.542 qpair failed and we were unable to recover it. 
00:27:25.542 [2024-12-10 22:58:33.107341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.542 [2024-12-10 22:58:33.107437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.107462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.107476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.107489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.107516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.117377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.117498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.117523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.117537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.117556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.117586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.127424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.127505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.127535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.127558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.127572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.127600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.137434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.137516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.137540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.137562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.137576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.137604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.147489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.147585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.147611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.147625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.147638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.147667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.157486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.157582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.157611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.157627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.157640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.157669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.167511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.167604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.167630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.167651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.167664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.167693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.177557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.177639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.177664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.177678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.177691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.177719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.187594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.187682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.187707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.187720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.187733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.187762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.197604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.197692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.197716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.197730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.197743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.197771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.207643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.207732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.207761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.207777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.207790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.207819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.217690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.217775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.217800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.217815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.217827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.217856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.227706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.227814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.227839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.227854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.227866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.227894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.543 qpair failed and we were unable to recover it. 
00:27:25.543 [2024-12-10 22:58:33.237753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.543 [2024-12-10 22:58:33.237844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.543 [2024-12-10 22:58:33.237870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.543 [2024-12-10 22:58:33.237884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.543 [2024-12-10 22:58:33.237897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.543 [2024-12-10 22:58:33.237926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.544 qpair failed and we were unable to recover it. 
00:27:25.544 [2024-12-10 22:58:33.247777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.544 [2024-12-10 22:58:33.247897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.544 [2024-12-10 22:58:33.247922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.544 [2024-12-10 22:58:33.247936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.544 [2024-12-10 22:58:33.247949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.544 [2024-12-10 22:58:33.247977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.544 qpair failed and we were unable to recover it. 
00:27:25.544 [2024-12-10 22:58:33.257813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.544 [2024-12-10 22:58:33.257900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.544 [2024-12-10 22:58:33.257930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.544 [2024-12-10 22:58:33.257944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.544 [2024-12-10 22:58:33.257957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.544 [2024-12-10 22:58:33.257984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.544 qpair failed and we were unable to recover it. 
00:27:25.544 [2024-12-10 22:58:33.267846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.544 [2024-12-10 22:58:33.267939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.544 [2024-12-10 22:58:33.267964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.544 [2024-12-10 22:58:33.267979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.544 [2024-12-10 22:58:33.267991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.544 [2024-12-10 22:58:33.268020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.544 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.277835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.277953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.277979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.277994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.278007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.278036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.804 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.287917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.288045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.288071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.288085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.288098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.288127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.804 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.297950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.298050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.298074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.298096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.298110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.298138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.804 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.307944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.308033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.308058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.308072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.308085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.308113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.804 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.317939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.318028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.318054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.318068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.318081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.318109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.804 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.328002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.328093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.328118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.328133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.328145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.328173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.804 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.338010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.338094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.338119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.338133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.338146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.338173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.804 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.348134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.348221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.348246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.348260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.348272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.348300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.804 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.358076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.358171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.358196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.358210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.358222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.358252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.804 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.368102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.368182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.368207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.368221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.368234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.368261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.804 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.378139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.378222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.378247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.378262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.378274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.378302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.804 qpair failed and we were unable to recover it. 
00:27:25.804 [2024-12-10 22:58:33.388206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.804 [2024-12-10 22:58:33.388306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.804 [2024-12-10 22:58:33.388334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.804 [2024-12-10 22:58:33.388352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.804 [2024-12-10 22:58:33.388365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.804 [2024-12-10 22:58:33.388396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.398276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.398360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.398385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.398400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.398412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.398440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.408235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.408347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.408373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.408387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.408400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.408428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.418266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.418389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.418414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.418428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.418441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.418469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.428312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.428409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.428435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.428458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.428473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.428503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.438301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.438389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.438415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.438429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.438442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.438470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.448359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.448448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.448473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.448487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.448499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.448528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.458339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.458422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.458447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.458461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.458474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.458502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.468385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.468480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.468504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.468518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.468531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.468566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.478401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.478501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.478525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.478539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.478561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.478590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.488428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.488515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.488539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.488562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.488575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.488604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.498558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.498687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.498712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.498726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.498738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.498766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.508507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.508615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.508640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.508654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.508668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.508696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.518632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.518721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.518746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.805 [2024-12-10 22:58:33.518760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.805 [2024-12-10 22:58:33.518773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.805 [2024-12-10 22:58:33.518801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.805 qpair failed and we were unable to recover it. 
00:27:25.805 [2024-12-10 22:58:33.528554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.805 [2024-12-10 22:58:33.528640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.805 [2024-12-10 22:58:33.528666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.806 [2024-12-10 22:58:33.528680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.806 [2024-12-10 22:58:33.528693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:25.806 [2024-12-10 22:58:33.528721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.806 qpair failed and we were unable to recover it. 
00:27:26.065 [2024-12-10 22:58:33.538624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.065 [2024-12-10 22:58:33.538710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.065 [2024-12-10 22:58:33.538736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.065 [2024-12-10 22:58:33.538750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.065 [2024-12-10 22:58:33.538763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.065 [2024-12-10 22:58:33.538792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.065 qpair failed and we were unable to recover it. 
00:27:26.065 [2024-12-10 22:58:33.548720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.065 [2024-12-10 22:58:33.548819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.065 [2024-12-10 22:58:33.548845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.065 [2024-12-10 22:58:33.548859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.065 [2024-12-10 22:58:33.548872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.065 [2024-12-10 22:58:33.548900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.065 qpair failed and we were unable to recover it. 
00:27:26.065 [2024-12-10 22:58:33.558668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.065 [2024-12-10 22:58:33.558785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.065 [2024-12-10 22:58:33.558810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.065 [2024-12-10 22:58:33.558831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.065 [2024-12-10 22:58:33.558845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.065 [2024-12-10 22:58:33.558874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.065 qpair failed and we were unable to recover it. 
00:27:26.065 [2024-12-10 22:58:33.568682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.065 [2024-12-10 22:58:33.568766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.065 [2024-12-10 22:58:33.568792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.065 [2024-12-10 22:58:33.568806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.065 [2024-12-10 22:58:33.568819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.065 [2024-12-10 22:58:33.568847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.065 qpair failed and we were unable to recover it. 
00:27:26.065 [2024-12-10 22:58:33.578725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.065 [2024-12-10 22:58:33.578813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.065 [2024-12-10 22:58:33.578838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.065 [2024-12-10 22:58:33.578852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.065 [2024-12-10 22:58:33.578864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.065 [2024-12-10 22:58:33.578892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.065 qpair failed and we were unable to recover it. 
00:27:26.065 [2024-12-10 22:58:33.588748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.065 [2024-12-10 22:58:33.588869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.065 [2024-12-10 22:58:33.588898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.065 [2024-12-10 22:58:33.588914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.065 [2024-12-10 22:58:33.588927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.065 [2024-12-10 22:58:33.588956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.065 qpair failed and we were unable to recover it. 
00:27:26.065 [2024-12-10 22:58:33.598742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.065 [2024-12-10 22:58:33.598825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.065 [2024-12-10 22:58:33.598851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.065 [2024-12-10 22:58:33.598865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.065 [2024-12-10 22:58:33.598878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.065 [2024-12-10 22:58:33.598906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.065 qpair failed and we were unable to recover it. 
00:27:26.065 [2024-12-10 22:58:33.608773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.065 [2024-12-10 22:58:33.608859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.065 [2024-12-10 22:58:33.608884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.065 [2024-12-10 22:58:33.608897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.065 [2024-12-10 22:58:33.608910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.065 [2024-12-10 22:58:33.608938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.065 qpair failed and we were unable to recover it. 
00:27:26.065 [2024-12-10 22:58:33.618795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.065 [2024-12-10 22:58:33.618878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.065 [2024-12-10 22:58:33.618903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.065 [2024-12-10 22:58:33.618917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.065 [2024-12-10 22:58:33.618931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.065 [2024-12-10 22:58:33.618959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.065 qpair failed and we were unable to recover it. 
00:27:26.065 [2024-12-10 22:58:33.628944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.065 [2024-12-10 22:58:33.629057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.065 [2024-12-10 22:58:33.629083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.629097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.629109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.629137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.638877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.638968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.638993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.639007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.639019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.639047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.648877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.648968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.648993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.649007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.649020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.649048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.658893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.658975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.659000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.659013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.659026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.659054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.668948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.669043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.669067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.669081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.669093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.669122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.679011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.679099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.679123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.679137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.679150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.679178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.689029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.689109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.689134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.689154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.689168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.689197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.699055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.699141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.699166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.699180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.699192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.699220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.709097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.709203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.709232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.709248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.709260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.709289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.719133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.719222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.719247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.719261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.719274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.719303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.729134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.729213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.729239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.729252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.729265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.729299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.739198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.739305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.739330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.739344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.739357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.739385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.066 [2024-12-10 22:58:33.749172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.066 [2024-12-10 22:58:33.749287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.066 [2024-12-10 22:58:33.749312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.066 [2024-12-10 22:58:33.749326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.066 [2024-12-10 22:58:33.749339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.066 [2024-12-10 22:58:33.749367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.066 qpair failed and we were unable to recover it. 
00:27:26.067 [2024-12-10 22:58:33.759292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.067 [2024-12-10 22:58:33.759378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.067 [2024-12-10 22:58:33.759403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.067 [2024-12-10 22:58:33.759416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.067 [2024-12-10 22:58:33.759429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.067 [2024-12-10 22:58:33.759457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.067 qpair failed and we were unable to recover it. 
00:27:26.067 [2024-12-10 22:58:33.769259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.067 [2024-12-10 22:58:33.769346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.067 [2024-12-10 22:58:33.769371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.067 [2024-12-10 22:58:33.769385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.067 [2024-12-10 22:58:33.769398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.067 [2024-12-10 22:58:33.769426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.067 qpair failed and we were unable to recover it. 
00:27:26.067 [2024-12-10 22:58:33.779278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.067 [2024-12-10 22:58:33.779372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.067 [2024-12-10 22:58:33.779398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.067 [2024-12-10 22:58:33.779412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.067 [2024-12-10 22:58:33.779424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.067 [2024-12-10 22:58:33.779452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.067 qpair failed and we were unable to recover it. 
00:27:26.067 [2024-12-10 22:58:33.789302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.067 [2024-12-10 22:58:33.789390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.067 [2024-12-10 22:58:33.789415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.067 [2024-12-10 22:58:33.789429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.067 [2024-12-10 22:58:33.789447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.067 [2024-12-10 22:58:33.789483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.067 qpair failed and we were unable to recover it. 
00:27:26.326 [2024-12-10 22:58:33.799320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.326 [2024-12-10 22:58:33.799444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.326 [2024-12-10 22:58:33.799471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.326 [2024-12-10 22:58:33.799487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.326 [2024-12-10 22:58:33.799508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.326 [2024-12-10 22:58:33.799539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.326 qpair failed and we were unable to recover it. 
00:27:26.326 [2024-12-10 22:58:33.809333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.326 [2024-12-10 22:58:33.809413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.326 [2024-12-10 22:58:33.809438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.326 [2024-12-10 22:58:33.809453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.326 [2024-12-10 22:58:33.809465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.326 [2024-12-10 22:58:33.809494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.326 qpair failed and we were unable to recover it. 
00:27:26.326 [2024-12-10 22:58:33.819368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.326 [2024-12-10 22:58:33.819452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.326 [2024-12-10 22:58:33.819477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.326 [2024-12-10 22:58:33.819498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.326 [2024-12-10 22:58:33.819512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.326 [2024-12-10 22:58:33.819541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.326 qpair failed and we were unable to recover it. 
00:27:26.326 [2024-12-10 22:58:33.829405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.326 [2024-12-10 22:58:33.829493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.326 [2024-12-10 22:58:33.829518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.326 [2024-12-10 22:58:33.829532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.326 [2024-12-10 22:58:33.829551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.326 [2024-12-10 22:58:33.829582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.326 qpair failed and we were unable to recover it. 
00:27:26.326 [2024-12-10 22:58:33.839477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.326 [2024-12-10 22:58:33.839575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.326 [2024-12-10 22:58:33.839601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.326 [2024-12-10 22:58:33.839615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.326 [2024-12-10 22:58:33.839627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.326 [2024-12-10 22:58:33.839656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.326 qpair failed and we were unable to recover it. 
00:27:26.326 [2024-12-10 22:58:33.849464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.326 [2024-12-10 22:58:33.849556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.326 [2024-12-10 22:58:33.849581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.326 [2024-12-10 22:58:33.849595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.326 [2024-12-10 22:58:33.849608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.326 [2024-12-10 22:58:33.849636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.326 qpair failed and we were unable to recover it. 
00:27:26.326 [2024-12-10 22:58:33.859528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.326 [2024-12-10 22:58:33.859624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.326 [2024-12-10 22:58:33.859651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.326 [2024-12-10 22:58:33.859670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.859685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.859721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.869553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.869642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.869667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.869682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.869694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.869723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.879600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.879696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.879722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.879736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.879749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.879778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.889592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.889672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.889697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.889711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.889723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.889752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.899668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.899766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.899792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.899807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.899818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.899847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.909724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.909830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.909855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.909869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.909882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.909910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.919764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.919852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.919877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.919892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.919904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.919932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.929783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.929864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.929889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.929903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.929916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.929944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.939757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.939885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.939910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.939924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.939936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.939965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.949783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.949871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.949895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.949914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.949927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.949955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.959769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.959853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.959879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.959893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.959907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.959937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.969811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.969897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.969921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.969935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.969948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.969976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.979870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.979949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.979974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.979989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.980001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.980029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.327 [2024-12-10 22:58:33.989885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.327 [2024-12-10 22:58:33.989975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.327 [2024-12-10 22:58:33.990000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.327 [2024-12-10 22:58:33.990014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.327 [2024-12-10 22:58:33.990027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.327 [2024-12-10 22:58:33.990061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.327 qpair failed and we were unable to recover it. 
00:27:26.328 [2024-12-10 22:58:33.999963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.328 [2024-12-10 22:58:34.000062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.328 [2024-12-10 22:58:34.000087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.328 [2024-12-10 22:58:34.000101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.328 [2024-12-10 22:58:34.000114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.328 [2024-12-10 22:58:34.000142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.328 qpair failed and we were unable to recover it. 
00:27:26.328 [2024-12-10 22:58:34.009949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.328 [2024-12-10 22:58:34.010058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.328 [2024-12-10 22:58:34.010084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.328 [2024-12-10 22:58:34.010098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.328 [2024-12-10 22:58:34.010111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.328 [2024-12-10 22:58:34.010139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.328 qpair failed and we were unable to recover it. 
00:27:26.328 [2024-12-10 22:58:34.019941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.328 [2024-12-10 22:58:34.020022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.328 [2024-12-10 22:58:34.020048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.328 [2024-12-10 22:58:34.020062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.328 [2024-12-10 22:58:34.020075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.328 [2024-12-10 22:58:34.020103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.328 qpair failed and we were unable to recover it. 
00:27:26.328 [2024-12-10 22:58:34.029981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.328 [2024-12-10 22:58:34.030068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.328 [2024-12-10 22:58:34.030093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.328 [2024-12-10 22:58:34.030107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.328 [2024-12-10 22:58:34.030120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.328 [2024-12-10 22:58:34.030148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.328 qpair failed and we were unable to recover it. 
00:27:26.328 [2024-12-10 22:58:34.040007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.328 [2024-12-10 22:58:34.040091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.328 [2024-12-10 22:58:34.040116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.328 [2024-12-10 22:58:34.040130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.328 [2024-12-10 22:58:34.040143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.328 [2024-12-10 22:58:34.040170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.328 qpair failed and we were unable to recover it.
00:27:26.328 [2024-12-10 22:58:34.050058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.328 [2024-12-10 22:58:34.050138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.328 [2024-12-10 22:58:34.050163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.328 [2024-12-10 22:58:34.050178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.328 [2024-12-10 22:58:34.050190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.328 [2024-12-10 22:58:34.050219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.328 qpair failed and we were unable to recover it.
00:27:26.587 [2024-12-10 22:58:34.060050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.587 [2024-12-10 22:58:34.060149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.587 [2024-12-10 22:58:34.060176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.587 [2024-12-10 22:58:34.060191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.587 [2024-12-10 22:58:34.060204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.587 [2024-12-10 22:58:34.060233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.587 qpair failed and we were unable to recover it.
00:27:26.587 [2024-12-10 22:58:34.070109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.587 [2024-12-10 22:58:34.070198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.587 [2024-12-10 22:58:34.070224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.587 [2024-12-10 22:58:34.070239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.587 [2024-12-10 22:58:34.070252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.587 [2024-12-10 22:58:34.070281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.587 qpair failed and we were unable to recover it.
00:27:26.587 [2024-12-10 22:58:34.080116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.587 [2024-12-10 22:58:34.080202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.587 [2024-12-10 22:58:34.080227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.587 [2024-12-10 22:58:34.080248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.587 [2024-12-10 22:58:34.080262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.587 [2024-12-10 22:58:34.080292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.587 qpair failed and we were unable to recover it.
00:27:26.587 [2024-12-10 22:58:34.090158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.587 [2024-12-10 22:58:34.090240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.587 [2024-12-10 22:58:34.090265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.587 [2024-12-10 22:58:34.090279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.587 [2024-12-10 22:58:34.090292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.587 [2024-12-10 22:58:34.090319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.587 qpair failed and we were unable to recover it.
00:27:26.587 [2024-12-10 22:58:34.100168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.587 [2024-12-10 22:58:34.100250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.587 [2024-12-10 22:58:34.100275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.587 [2024-12-10 22:58:34.100289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.587 [2024-12-10 22:58:34.100302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.587 [2024-12-10 22:58:34.100330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.587 qpair failed and we were unable to recover it.
00:27:26.587 [2024-12-10 22:58:34.110253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.587 [2024-12-10 22:58:34.110342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.587 [2024-12-10 22:58:34.110367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.110381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.110393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.110422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.120251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.120345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.120371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.120386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.120398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.120433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.130349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.130429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.130455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.130469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.130482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.130510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.140302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.140400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.140426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.140439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.140452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.140481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.150346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.150460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.150485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.150499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.150512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.150539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.160455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.160539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.160576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.160591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.160603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.160631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.170450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.170595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.170621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.170634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.170649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.170679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.180396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.180482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.180506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.180520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.180532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.180569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.190438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.190573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.190598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.190612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.190625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.190653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.200477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.200577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.200602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.200616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.200629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.200657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.210484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.210616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.210645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.210668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.210682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.210712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.220614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.220696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.220722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.220736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.220749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.220778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.230560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.230653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.230678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.230692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.230705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.230733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.588 [2024-12-10 22:58:34.240586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.588 [2024-12-10 22:58:34.240694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.588 [2024-12-10 22:58:34.240719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.588 [2024-12-10 22:58:34.240733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.588 [2024-12-10 22:58:34.240746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.588 [2024-12-10 22:58:34.240775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.588 qpair failed and we were unable to recover it.
00:27:26.589 [2024-12-10 22:58:34.250593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.589 [2024-12-10 22:58:34.250683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.589 [2024-12-10 22:58:34.250708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.589 [2024-12-10 22:58:34.250722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.589 [2024-12-10 22:58:34.250735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.589 [2024-12-10 22:58:34.250771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.589 qpair failed and we were unable to recover it.
00:27:26.589 [2024-12-10 22:58:34.260649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.589 [2024-12-10 22:58:34.260764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.589 [2024-12-10 22:58:34.260789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.589 [2024-12-10 22:58:34.260803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.589 [2024-12-10 22:58:34.260816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.589 [2024-12-10 22:58:34.260844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.589 qpair failed and we were unable to recover it.
00:27:26.589 [2024-12-10 22:58:34.270714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.589 [2024-12-10 22:58:34.270808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.589 [2024-12-10 22:58:34.270832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.589 [2024-12-10 22:58:34.270846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.589 [2024-12-10 22:58:34.270859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.589 [2024-12-10 22:58:34.270887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.589 qpair failed and we were unable to recover it.
00:27:26.589 [2024-12-10 22:58:34.280673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.589 [2024-12-10 22:58:34.280752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.589 [2024-12-10 22:58:34.280776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.589 [2024-12-10 22:58:34.280790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.589 [2024-12-10 22:58:34.280803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.589 [2024-12-10 22:58:34.280831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.589 qpair failed and we were unable to recover it.
00:27:26.589 [2024-12-10 22:58:34.290729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.589 [2024-12-10 22:58:34.290817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.589 [2024-12-10 22:58:34.290841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.589 [2024-12-10 22:58:34.290856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.589 [2024-12-10 22:58:34.290868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.589 [2024-12-10 22:58:34.290896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.589 qpair failed and we were unable to recover it.
00:27:26.589 [2024-12-10 22:58:34.300735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.589 [2024-12-10 22:58:34.300820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.589 [2024-12-10 22:58:34.300845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.589 [2024-12-10 22:58:34.300859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.589 [2024-12-10 22:58:34.300872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.589 [2024-12-10 22:58:34.300899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.589 qpair failed and we were unable to recover it.
00:27:26.589 [2024-12-10 22:58:34.310813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.589 [2024-12-10 22:58:34.310899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.589 [2024-12-10 22:58:34.310924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.589 [2024-12-10 22:58:34.310938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.589 [2024-12-10 22:58:34.310950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.589 [2024-12-10 22:58:34.310980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.589 qpair failed and we were unable to recover it.
00:27:26.850 [2024-12-10 22:58:34.320807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.850 [2024-12-10 22:58:34.320891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.850 [2024-12-10 22:58:34.320923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.850 [2024-12-10 22:58:34.320938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.850 [2024-12-10 22:58:34.320951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.850 [2024-12-10 22:58:34.320985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.850 qpair failed and we were unable to recover it.
00:27:26.850 [2024-12-10 22:58:34.330932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.850 [2024-12-10 22:58:34.331015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.850 [2024-12-10 22:58:34.331041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.850 [2024-12-10 22:58:34.331055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.850 [2024-12-10 22:58:34.331068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.850 [2024-12-10 22:58:34.331096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.850 qpair failed and we were unable to recover it.
00:27:26.850 [2024-12-10 22:58:34.340922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.850 [2024-12-10 22:58:34.341057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.850 [2024-12-10 22:58:34.341082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.850 [2024-12-10 22:58:34.341102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.850 [2024-12-10 22:58:34.341116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.850 [2024-12-10 22:58:34.341147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.850 qpair failed and we were unable to recover it.
00:27:26.850 [2024-12-10 22:58:34.350898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.850 [2024-12-10 22:58:34.351020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.850 [2024-12-10 22:58:34.351044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.850 [2024-12-10 22:58:34.351058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.850 [2024-12-10 22:58:34.351070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.850 [2024-12-10 22:58:34.351099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.850 qpair failed and we were unable to recover it.
00:27:26.850 [2024-12-10 22:58:34.361005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.850 [2024-12-10 22:58:34.361090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.850 [2024-12-10 22:58:34.361115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.850 [2024-12-10 22:58:34.361129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.850 [2024-12-10 22:58:34.361142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.850 [2024-12-10 22:58:34.361170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.850 qpair failed and we were unable to recover it.
00:27:26.850 [2024-12-10 22:58:34.370971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.850 [2024-12-10 22:58:34.371065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.850 [2024-12-10 22:58:34.371090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.850 [2024-12-10 22:58:34.371104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.850 [2024-12-10 22:58:34.371117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.850 [2024-12-10 22:58:34.371145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.850 qpair failed and we were unable to recover it.
00:27:26.850 [2024-12-10 22:58:34.380943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.850 [2024-12-10 22:58:34.381032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.850 [2024-12-10 22:58:34.381057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.850 [2024-12-10 22:58:34.381071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.850 [2024-12-10 22:58:34.381084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0
00:27:26.850 [2024-12-10 22:58:34.381119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:26.850 qpair failed and we were unable to recover it.
00:27:26.850 [2024-12-10 22:58:34.391001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.850 [2024-12-10 22:58:34.391088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.850 [2024-12-10 22:58:34.391112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.850 [2024-12-10 22:58:34.391126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.850 [2024-12-10 22:58:34.391138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.850 [2024-12-10 22:58:34.391166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.850 qpair failed and we were unable to recover it. 
00:27:26.850 [2024-12-10 22:58:34.401063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.850 [2024-12-10 22:58:34.401168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.850 [2024-12-10 22:58:34.401197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.850 [2024-12-10 22:58:34.401216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.850 [2024-12-10 22:58:34.401229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.850 [2024-12-10 22:58:34.401258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.850 qpair failed and we were unable to recover it. 
00:27:26.850 [2024-12-10 22:58:34.411045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.850 [2024-12-10 22:58:34.411171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.850 [2024-12-10 22:58:34.411196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.850 [2024-12-10 22:58:34.411210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.850 [2024-12-10 22:58:34.411223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.850 [2024-12-10 22:58:34.411252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.850 qpair failed and we were unable to recover it. 
00:27:26.850 [2024-12-10 22:58:34.421056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.850 [2024-12-10 22:58:34.421140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.421165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.421179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.421192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.421219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.431227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.431323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.431348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.431362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.431375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.431403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.441123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.441216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.441241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.441255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.441268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.441298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.451185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.451269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.451294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.451308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.451321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.451348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.461201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.461288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.461312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.461327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.461340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.461368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.471256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.471345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.471370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.471394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.471407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.471436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.481281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.481378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.481403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.481416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.481429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.481457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.491300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.491383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.491408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.491422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.491435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.491463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.501343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.501423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.501447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.501461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.501474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.501502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.511340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.511432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.511457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.511471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.511483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.511518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.521394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.521482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.521507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.521520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.521533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.521571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.531413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.531497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.531521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.531535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.531558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.531589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.541440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.541531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.541562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.541577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.541590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.851 [2024-12-10 22:58:34.541618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.851 qpair failed and we were unable to recover it. 
00:27:26.851 [2024-12-10 22:58:34.551570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.851 [2024-12-10 22:58:34.551655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.851 [2024-12-10 22:58:34.551679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.851 [2024-12-10 22:58:34.551693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.851 [2024-12-10 22:58:34.551706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.852 [2024-12-10 22:58:34.551734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.852 qpair failed and we were unable to recover it. 
00:27:26.852 [2024-12-10 22:58:34.561502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.852 [2024-12-10 22:58:34.561630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.852 [2024-12-10 22:58:34.561655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.852 [2024-12-10 22:58:34.561669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.852 [2024-12-10 22:58:34.561681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.852 [2024-12-10 22:58:34.561711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.852 qpair failed and we were unable to recover it. 
00:27:26.852 [2024-12-10 22:58:34.571513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.852 [2024-12-10 22:58:34.571606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.852 [2024-12-10 22:58:34.571631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.852 [2024-12-10 22:58:34.571644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.852 [2024-12-10 22:58:34.571657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:26.852 [2024-12-10 22:58:34.571686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.852 qpair failed and we were unable to recover it. 
00:27:27.113 [2024-12-10 22:58:34.581537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.113 [2024-12-10 22:58:34.581625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.113 [2024-12-10 22:58:34.581652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.113 [2024-12-10 22:58:34.581666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.113 [2024-12-10 22:58:34.581680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.113 [2024-12-10 22:58:34.581709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.113 qpair failed and we were unable to recover it. 
00:27:27.113 [2024-12-10 22:58:34.591646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.113 [2024-12-10 22:58:34.591739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.113 [2024-12-10 22:58:34.591765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.113 [2024-12-10 22:58:34.591779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.113 [2024-12-10 22:58:34.591792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.113 [2024-12-10 22:58:34.591821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.113 qpair failed and we were unable to recover it. 
00:27:27.113 [2024-12-10 22:58:34.601625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.113 [2024-12-10 22:58:34.601727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.113 [2024-12-10 22:58:34.601752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.113 [2024-12-10 22:58:34.601774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.113 [2024-12-10 22:58:34.601788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.113 [2024-12-10 22:58:34.601816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.113 qpair failed and we were unable to recover it. 
00:27:27.113 [2024-12-10 22:58:34.611629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.113 [2024-12-10 22:58:34.611717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.113 [2024-12-10 22:58:34.611743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.113 [2024-12-10 22:58:34.611757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.113 [2024-12-10 22:58:34.611770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.113 [2024-12-10 22:58:34.611797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.113 qpair failed and we were unable to recover it. 
00:27:27.113 [2024-12-10 22:58:34.621647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.113 [2024-12-10 22:58:34.621741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.113 [2024-12-10 22:58:34.621766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.113 [2024-12-10 22:58:34.621780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.113 [2024-12-10 22:58:34.621793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.113 [2024-12-10 22:58:34.621822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.113 qpair failed and we were unable to recover it. 
00:27:27.113 [2024-12-10 22:58:34.631696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.113 [2024-12-10 22:58:34.631786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.113 [2024-12-10 22:58:34.631811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.113 [2024-12-10 22:58:34.631825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.113 [2024-12-10 22:58:34.631839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.113 [2024-12-10 22:58:34.631867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.113 qpair failed and we were unable to recover it. 
00:27:27.113 [2024-12-10 22:58:34.641729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.641819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.641845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.641859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.641872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.641907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.651765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.651885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.651910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.651924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.651937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.651966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.661760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.661838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.661863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.661878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.661890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.661918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.671848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.671941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.671969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.671986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.671999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.672028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.681870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.681953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.681978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.681992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.682005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.682034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.691872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.691968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.691993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.692008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.692020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.692048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.701879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.701960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.701984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.701999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.702011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.702038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.711915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.712004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.712029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.712043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.712055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.712083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.721949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.722042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.722067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.722081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.722094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.722121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.732013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.732103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.732127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.732147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.732161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.732189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.742048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.742138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.742163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.742177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.742190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.742219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.752064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.114 [2024-12-10 22:58:34.752154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.114 [2024-12-10 22:58:34.752180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.114 [2024-12-10 22:58:34.752194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.114 [2024-12-10 22:58:34.752206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.114 [2024-12-10 22:58:34.752234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.114 qpair failed and we were unable to recover it. 
00:27:27.114 [2024-12-10 22:58:34.762048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.115 [2024-12-10 22:58:34.762137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.115 [2024-12-10 22:58:34.762161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.115 [2024-12-10 22:58:34.762176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.115 [2024-12-10 22:58:34.762188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.115 [2024-12-10 22:58:34.762216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.115 qpair failed and we were unable to recover it. 
00:27:27.115 [2024-12-10 22:58:34.772116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.115 [2024-12-10 22:58:34.772200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.115 [2024-12-10 22:58:34.772225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.115 [2024-12-10 22:58:34.772239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.115 [2024-12-10 22:58:34.772252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.115 [2024-12-10 22:58:34.772286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.115 qpair failed and we were unable to recover it. 
00:27:27.115 [2024-12-10 22:58:34.782139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.115 [2024-12-10 22:58:34.782225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.115 [2024-12-10 22:58:34.782250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.115 [2024-12-10 22:58:34.782264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.115 [2024-12-10 22:58:34.782277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.115 [2024-12-10 22:58:34.782305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.115 qpair failed and we were unable to recover it. 
00:27:27.115 [2024-12-10 22:58:34.792175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.115 [2024-12-10 22:58:34.792266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.115 [2024-12-10 22:58:34.792291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.115 [2024-12-10 22:58:34.792305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.115 [2024-12-10 22:58:34.792318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.115 [2024-12-10 22:58:34.792346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.115 qpair failed and we were unable to recover it. 
00:27:27.115 [2024-12-10 22:58:34.802174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.115 [2024-12-10 22:58:34.802266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.115 [2024-12-10 22:58:34.802291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.115 [2024-12-10 22:58:34.802305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.115 [2024-12-10 22:58:34.802317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.115 [2024-12-10 22:58:34.802346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.115 qpair failed and we were unable to recover it. 
00:27:27.115 [2024-12-10 22:58:34.812228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.115 [2024-12-10 22:58:34.812357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.115 [2024-12-10 22:58:34.812382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.115 [2024-12-10 22:58:34.812396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.115 [2024-12-10 22:58:34.812408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.115 [2024-12-10 22:58:34.812436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.115 qpair failed and we were unable to recover it. 
00:27:27.115 [2024-12-10 22:58:34.822248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.115 [2024-12-10 22:58:34.822326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.115 [2024-12-10 22:58:34.822352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.115 [2024-12-10 22:58:34.822366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.115 [2024-12-10 22:58:34.822379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.115 [2024-12-10 22:58:34.822407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.115 qpair failed and we were unable to recover it. 
00:27:27.115 [2024-12-10 22:58:34.832278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.115 [2024-12-10 22:58:34.832398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.115 [2024-12-10 22:58:34.832423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.115 [2024-12-10 22:58:34.832437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.115 [2024-12-10 22:58:34.832450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.115 [2024-12-10 22:58:34.832478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.115 qpair failed and we were unable to recover it. 
00:27:27.377 [2024-12-10 22:58:34.842307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.377 [2024-12-10 22:58:34.842391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.377 [2024-12-10 22:58:34.842417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.377 [2024-12-10 22:58:34.842431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.377 [2024-12-10 22:58:34.842444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.377 [2024-12-10 22:58:34.842475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.377 qpair failed and we were unable to recover it. 
00:27:27.377 [2024-12-10 22:58:34.852308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.377 [2024-12-10 22:58:34.852396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.377 [2024-12-10 22:58:34.852422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.377 [2024-12-10 22:58:34.852436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.377 [2024-12-10 22:58:34.852449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.377 [2024-12-10 22:58:34.852477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.377 qpair failed and we were unable to recover it. 
00:27:27.377 [2024-12-10 22:58:34.862374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.377 [2024-12-10 22:58:34.862456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.377 [2024-12-10 22:58:34.862487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.377 [2024-12-10 22:58:34.862502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.377 [2024-12-10 22:58:34.862515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.377 [2024-12-10 22:58:34.862550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.377 qpair failed and we were unable to recover it. 
00:27:27.377 [2024-12-10 22:58:34.872396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.377 [2024-12-10 22:58:34.872486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.377 [2024-12-10 22:58:34.872511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.377 [2024-12-10 22:58:34.872525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.377 [2024-12-10 22:58:34.872538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.377 [2024-12-10 22:58:34.872575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.377 qpair failed and we were unable to recover it. 
00:27:27.377 [2024-12-10 22:58:34.882405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.377 [2024-12-10 22:58:34.882501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.377 [2024-12-10 22:58:34.882526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.377 [2024-12-10 22:58:34.882540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.377 [2024-12-10 22:58:34.882562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.377 [2024-12-10 22:58:34.882590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.377 qpair failed and we were unable to recover it. 
00:27:27.377 [2024-12-10 22:58:34.892419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.377 [2024-12-10 22:58:34.892503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.377 [2024-12-10 22:58:34.892528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.377 [2024-12-10 22:58:34.892542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.377 [2024-12-10 22:58:34.892563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.377 [2024-12-10 22:58:34.892591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.377 qpair failed and we were unable to recover it. 
00:27:27.377 [2024-12-10 22:58:34.902449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.377 [2024-12-10 22:58:34.902534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.377 [2024-12-10 22:58:34.902565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.377 [2024-12-10 22:58:34.902580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.377 [2024-12-10 22:58:34.902592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.377 [2024-12-10 22:58:34.902625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.377 qpair failed and we were unable to recover it. 
00:27:27.377 [2024-12-10 22:58:34.912538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.377 [2024-12-10 22:58:34.912633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.377 [2024-12-10 22:58:34.912659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.377 [2024-12-10 22:58:34.912673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.377 [2024-12-10 22:58:34.912686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.377 [2024-12-10 22:58:34.912714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.377 qpair failed and we were unable to recover it. 
00:27:27.377 [2024-12-10 22:58:34.922515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.377 [2024-12-10 22:58:34.922606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.377 [2024-12-10 22:58:34.922631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.377 [2024-12-10 22:58:34.922644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.377 [2024-12-10 22:58:34.922657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.377 [2024-12-10 22:58:34.922685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.377 qpair failed and we were unable to recover it. 
00:27:27.377 [2024-12-10 22:58:34.932570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.377 [2024-12-10 22:58:34.932684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.377 [2024-12-10 22:58:34.932709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.377 [2024-12-10 22:58:34.932722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.377 [2024-12-10 22:58:34.932735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.377 [2024-12-10 22:58:34.932764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.377 qpair failed and we were unable to recover it. 
00:27:27.377 [2024-12-10 22:58:34.942601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.378 [2024-12-10 22:58:34.942687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.378 [2024-12-10 22:58:34.942713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.378 [2024-12-10 22:58:34.942726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.378 [2024-12-10 22:58:34.942739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.378 [2024-12-10 22:58:34.942767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.378 qpair failed and we were unable to recover it. 
00:27:27.378 [2024-12-10 22:58:34.952688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.378 [2024-12-10 22:58:34.952778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.378 [2024-12-10 22:58:34.952801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.378 [2024-12-10 22:58:34.952815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.378 [2024-12-10 22:58:34.952826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.378 [2024-12-10 22:58:34.952854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.378 qpair failed and we were unable to recover it. 
00:27:27.378 [2024-12-10 22:58:34.962689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.378 [2024-12-10 22:58:34.962775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.378 [2024-12-10 22:58:34.962800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.378 [2024-12-10 22:58:34.962814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.378 [2024-12-10 22:58:34.962826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.378 [2024-12-10 22:58:34.962854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.378 qpair failed and we were unable to recover it. 
00:27:27.378 [2024-12-10 22:58:34.972721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.378 [2024-12-10 22:58:34.972804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.378 [2024-12-10 22:58:34.972829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.378 [2024-12-10 22:58:34.972843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.378 [2024-12-10 22:58:34.972855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.378 [2024-12-10 22:58:34.972883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.378 qpair failed and we were unable to recover it. 
00:27:27.378 [2024-12-10 22:58:34.982722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.378 [2024-12-10 22:58:34.982808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.378 [2024-12-10 22:58:34.982833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.378 [2024-12-10 22:58:34.982847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.378 [2024-12-10 22:58:34.982859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.378 [2024-12-10 22:58:34.982887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.378 qpair failed and we were unable to recover it. 
00:27:27.378 [2024-12-10 22:58:34.992734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.378 [2024-12-10 22:58:34.992828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.378 [2024-12-10 22:58:34.992857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.378 [2024-12-10 22:58:34.992873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.378 [2024-12-10 22:58:34.992886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.378 [2024-12-10 22:58:34.992914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.378 qpair failed and we were unable to recover it. 
00:27:27.378 [2024-12-10 22:58:35.002784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.378 [2024-12-10 22:58:35.002875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.378 [2024-12-10 22:58:35.002900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.378 [2024-12-10 22:58:35.002915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.378 [2024-12-10 22:58:35.002927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.378 [2024-12-10 22:58:35.002956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.378 qpair failed and we were unable to recover it. 
00:27:27.378 [2024-12-10 22:58:35.012782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.378 [2024-12-10 22:58:35.012863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.378 [2024-12-10 22:58:35.012888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.378 [2024-12-10 22:58:35.012902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.378 [2024-12-10 22:58:35.012915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.378 [2024-12-10 22:58:35.012943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.378 qpair failed and we were unable to recover it. 
00:27:27.378 [2024-12-10 22:58:35.022837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.378 [2024-12-10 22:58:35.022959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.378 [2024-12-10 22:58:35.022984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.378 [2024-12-10 22:58:35.022999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.378 [2024-12-10 22:58:35.023011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.378 [2024-12-10 22:58:35.023039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.378 qpair failed and we were unable to recover it. 
00:27:27.378 [2024-12-10 22:58:35.032849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.378 [2024-12-10 22:58:35.032940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.378 [2024-12-10 22:58:35.032965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.378 [2024-12-10 22:58:35.032979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.378 [2024-12-10 22:58:35.032992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.378 [2024-12-10 22:58:35.033026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.378 qpair failed and we were unable to recover it. 
00:27:27.378 [2024-12-10 22:58:35.042887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.378 [2024-12-10 22:58:35.042990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.378 [2024-12-10 22:58:35.043015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.378 [2024-12-10 22:58:35.043029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.378 [2024-12-10 22:58:35.043042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.378 [2024-12-10 22:58:35.043070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.378 qpair failed and we were unable to recover it. 
00:27:27.378 [2024-12-10 22:58:35.052896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.379 [2024-12-10 22:58:35.053023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.379 [2024-12-10 22:58:35.053048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.379 [2024-12-10 22:58:35.053062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.379 [2024-12-10 22:58:35.053075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.379 [2024-12-10 22:58:35.053102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.379 qpair failed and we were unable to recover it. 
00:27:27.379 [2024-12-10 22:58:35.062910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.379 [2024-12-10 22:58:35.062987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.379 [2024-12-10 22:58:35.063012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.379 [2024-12-10 22:58:35.063026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.379 [2024-12-10 22:58:35.063039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.379 [2024-12-10 22:58:35.063066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.379 qpair failed and we were unable to recover it. 
00:27:27.379 [2024-12-10 22:58:35.072971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.379 [2024-12-10 22:58:35.073058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.379 [2024-12-10 22:58:35.073082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.379 [2024-12-10 22:58:35.073096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.379 [2024-12-10 22:58:35.073108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.379 [2024-12-10 22:58:35.073139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.379 qpair failed and we were unable to recover it. 
00:27:27.379 [2024-12-10 22:58:35.083063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.379 [2024-12-10 22:58:35.083178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.379 [2024-12-10 22:58:35.083204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.379 [2024-12-10 22:58:35.083217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.379 [2024-12-10 22:58:35.083230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.379 [2024-12-10 22:58:35.083258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.379 qpair failed and we were unable to recover it. 
00:27:27.379 [2024-12-10 22:58:35.093032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.379 [2024-12-10 22:58:35.093127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.379 [2024-12-10 22:58:35.093152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.379 [2024-12-10 22:58:35.093166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.379 [2024-12-10 22:58:35.093179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.379 [2024-12-10 22:58:35.093207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.379 qpair failed and we were unable to recover it. 
00:27:27.379 [2024-12-10 22:58:35.103040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.379 [2024-12-10 22:58:35.103125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.379 [2024-12-10 22:58:35.103152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.379 [2024-12-10 22:58:35.103166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.379 [2024-12-10 22:58:35.103179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.379 [2024-12-10 22:58:35.103208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.379 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-12-10 22:58:35.113174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.640 [2024-12-10 22:58:35.113266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.640 [2024-12-10 22:58:35.113291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.640 [2024-12-10 22:58:35.113306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.640 [2024-12-10 22:58:35.113319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.640 [2024-12-10 22:58:35.113347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-12-10 22:58:35.123111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.640 [2024-12-10 22:58:35.123196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.640 [2024-12-10 22:58:35.123227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.640 [2024-12-10 22:58:35.123243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.640 [2024-12-10 22:58:35.123256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.640 [2024-12-10 22:58:35.123285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-12-10 22:58:35.133168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.640 [2024-12-10 22:58:35.133273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.640 [2024-12-10 22:58:35.133299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.640 [2024-12-10 22:58:35.133314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.640 [2024-12-10 22:58:35.133327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ea6fa0 00:27:27.640 [2024-12-10 22:58:35.133356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-12-10 22:58:35.143206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.640 [2024-12-10 22:58:35.143291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.640 [2024-12-10 22:58:35.143326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.640 [2024-12-10 22:58:35.143342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.640 [2024-12-10 22:58:35.143356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08dc000b90 00:27:27.640 [2024-12-10 22:58:35.143388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-12-10 22:58:35.153227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.640 [2024-12-10 22:58:35.153320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.640 [2024-12-10 22:58:35.153347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.640 [2024-12-10 22:58:35.153363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.640 [2024-12-10 22:58:35.153376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f08dc000b90 00:27:27.640 [2024-12-10 22:58:35.153407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-12-10 22:58:35.153562] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:27.640 A controller has encountered a failure and is being reset. 00:27:27.640 [2024-12-10 22:58:35.153628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb4f30 (9): Bad file descriptor 00:27:27.640 Controller properly reset. 
00:27:27.640 Initializing NVMe Controllers 00:27:27.640 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:27.640 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:27.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:27.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:27.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:27.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:27.640 Initialization complete. Launching workers. 00:27:27.640 Starting thread on core 1 00:27:27.640 Starting thread on core 2 00:27:27.640 Starting thread on core 3 00:27:27.640 Starting thread on core 0 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:27.640 00:27:27.640 real 0m10.746s 00:27:27.640 user 0m19.491s 00:27:27.640 sys 0m4.989s 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.640 ************************************ 00:27:27.640 END TEST nvmf_target_disconnect_tc2 00:27:27.640 ************************************ 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:27.640 22:58:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:27.640 rmmod nvme_tcp 00:27:27.640 rmmod nvme_fabrics 00:27:27.640 rmmod nvme_keyring 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 180440 ']' 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 180440 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 180440 ']' 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 180440 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 180440 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 180440' 00:27:27.640 killing process with pid 180440 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 180440 00:27:27.640 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 180440 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.899 22:58:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.434 22:58:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:30.434 00:27:30.434 real 0m15.769s 00:27:30.434 user 0m45.862s 00:27:30.434 sys 
0m7.085s 00:27:30.434 22:58:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.434 22:58:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:30.434 ************************************ 00:27:30.434 END TEST nvmf_target_disconnect 00:27:30.434 ************************************ 00:27:30.434 22:58:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:30.434 00:27:30.434 real 5m6.596s 00:27:30.434 user 10m53.922s 00:27:30.434 sys 1m13.079s 00:27:30.434 22:58:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.434 22:58:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.434 ************************************ 00:27:30.434 END TEST nvmf_host 00:27:30.434 ************************************ 00:27:30.434 22:58:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:30.434 22:58:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:30.434 22:58:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:30.434 22:58:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:30.434 22:58:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.434 22:58:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:30.434 ************************************ 00:27:30.434 START TEST nvmf_target_core_interrupt_mode 00:27:30.434 ************************************ 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:30.434 * Looking for test storage... 
00:27:30.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:30.434 22:58:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:30.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.434 --rc 
genhtml_branch_coverage=1 00:27:30.434 --rc genhtml_function_coverage=1 00:27:30.434 --rc genhtml_legend=1 00:27:30.434 --rc geninfo_all_blocks=1 00:27:30.434 --rc geninfo_unexecuted_blocks=1 00:27:30.434 00:27:30.434 ' 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:30.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.434 --rc genhtml_branch_coverage=1 00:27:30.434 --rc genhtml_function_coverage=1 00:27:30.434 --rc genhtml_legend=1 00:27:30.434 --rc geninfo_all_blocks=1 00:27:30.434 --rc geninfo_unexecuted_blocks=1 00:27:30.434 00:27:30.434 ' 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:30.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.434 --rc genhtml_branch_coverage=1 00:27:30.434 --rc genhtml_function_coverage=1 00:27:30.434 --rc genhtml_legend=1 00:27:30.434 --rc geninfo_all_blocks=1 00:27:30.434 --rc geninfo_unexecuted_blocks=1 00:27:30.434 00:27:30.434 ' 00:27:30.434 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:30.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.435 --rc genhtml_branch_coverage=1 00:27:30.435 --rc genhtml_function_coverage=1 00:27:30.435 --rc genhtml_legend=1 00:27:30.435 --rc geninfo_all_blocks=1 00:27:30.435 --rc geninfo_unexecuted_blocks=1 00:27:30.435 00:27:30.435 ' 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.435 
22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.435 22:58:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:30.435 
22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:30.435 ************************************ 00:27:30.435 START TEST nvmf_abort 00:27:30.435 ************************************ 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:30.435 * Looking for test storage... 
00:27:30.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:30.435 22:58:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:30.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.435 --rc genhtml_branch_coverage=1 00:27:30.435 --rc genhtml_function_coverage=1 00:27:30.435 --rc genhtml_legend=1 00:27:30.435 --rc geninfo_all_blocks=1 00:27:30.435 --rc geninfo_unexecuted_blocks=1 00:27:30.435 00:27:30.435 ' 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:30.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.435 --rc genhtml_branch_coverage=1 00:27:30.435 --rc genhtml_function_coverage=1 00:27:30.435 --rc genhtml_legend=1 00:27:30.435 --rc geninfo_all_blocks=1 00:27:30.435 --rc geninfo_unexecuted_blocks=1 00:27:30.435 00:27:30.435 ' 00:27:30.435 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:30.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.436 --rc genhtml_branch_coverage=1 00:27:30.436 --rc genhtml_function_coverage=1 00:27:30.436 --rc genhtml_legend=1 00:27:30.436 --rc geninfo_all_blocks=1 00:27:30.436 --rc geninfo_unexecuted_blocks=1 00:27:30.436 00:27:30.436 ' 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:30.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.436 --rc genhtml_branch_coverage=1 00:27:30.436 --rc genhtml_function_coverage=1 00:27:30.436 --rc genhtml_legend=1 00:27:30.436 --rc geninfo_all_blocks=1 00:27:30.436 --rc geninfo_unexecuted_blocks=1 00:27:30.436 00:27:30.436 ' 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.436 22:58:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:30.436 22:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:30.436 22:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:30.436 22:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:32.339 22:58:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:32.339 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:32.339 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.339 
22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:32.339 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:32.339 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:32.339 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.340 22:58:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.340 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:32.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:27:32.598 00:27:32.598 --- 10.0.0.2 ping statistics --- 00:27:32.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.598 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:32.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:27:32.598 00:27:32.598 --- 10.0.0.1 ping statistics --- 00:27:32.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.598 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:32.598 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=183270 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 183270 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 183270 ']' 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:32.599 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:32.599 [2024-12-10 22:58:40.269667] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:32.599 [2024-12-10 22:58:40.270769] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:27:32.599 [2024-12-10 22:58:40.270822] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.859 [2024-12-10 22:58:40.344644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:32.859 [2024-12-10 22:58:40.406879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.859 [2024-12-10 22:58:40.406942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.859 [2024-12-10 22:58:40.406971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.859 [2024-12-10 22:58:40.406983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.859 [2024-12-10 22:58:40.406992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.859 [2024-12-10 22:58:40.408681] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.859 [2024-12-10 22:58:40.408705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.859 [2024-12-10 22:58:40.408709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.859 [2024-12-10 22:58:40.509741] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:32.859 [2024-12-10 22:58:40.509933] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:32.859 [2024-12-10 22:58:40.509965] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:32.859 [2024-12-10 22:58:40.510186] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:32.859 [2024-12-10 22:58:40.561557] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.859 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:33.118 Malloc0 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.118 Delay0 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.118 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.119 [2024-12-10 22:58:40.637803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.119 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.119 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:33.119 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.119 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.119 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.119 22:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:33.119 [2024-12-10 22:58:40.746412] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:35.656 Initializing NVMe Controllers 00:27:35.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:35.656 controller IO queue size 128 less than required 00:27:35.656 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:35.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:35.656 Initialization complete. Launching workers. 
00:27:35.656 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28336 00:27:35.656 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28393, failed to submit 66 00:27:35.656 success 28336, unsuccessful 57, failed 0 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.656 rmmod nvme_tcp 00:27:35.656 rmmod nvme_fabrics 00:27:35.656 rmmod nvme_keyring 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.656 22:58:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:35.656 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 183270 ']' 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 183270 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 183270 ']' 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 183270 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 183270 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 183270' 00:27:35.657 killing process with pid 183270 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 183270 00:27:35.657 22:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 183270 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.657 22:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.565 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.565 00:27:37.565 real 0m7.421s 00:27:37.565 user 0m9.505s 00:27:37.565 sys 0m2.942s 00:27:37.565 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.565 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:37.565 ************************************ 00:27:37.565 END TEST nvmf_abort 00:27:37.565 ************************************ 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:37.825 ************************************ 00:27:37.825 START TEST nvmf_ns_hotplug_stress 00:27:37.825 ************************************ 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:37.825 * Looking for test storage... 00:27:37.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:37.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.825 --rc genhtml_branch_coverage=1 00:27:37.825 --rc genhtml_function_coverage=1 00:27:37.825 --rc genhtml_legend=1 00:27:37.825 --rc geninfo_all_blocks=1 00:27:37.825 --rc geninfo_unexecuted_blocks=1 00:27:37.825 00:27:37.825 ' 00:27:37.825 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:37.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.826 --rc genhtml_branch_coverage=1 00:27:37.826 --rc genhtml_function_coverage=1 00:27:37.826 --rc genhtml_legend=1 00:27:37.826 --rc geninfo_all_blocks=1 00:27:37.826 --rc geninfo_unexecuted_blocks=1 00:27:37.826 00:27:37.826 ' 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:37.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.826 --rc genhtml_branch_coverage=1 00:27:37.826 --rc genhtml_function_coverage=1 00:27:37.826 --rc genhtml_legend=1 00:27:37.826 --rc geninfo_all_blocks=1 00:27:37.826 --rc geninfo_unexecuted_blocks=1 00:27:37.826 00:27:37.826 ' 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:37.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.826 --rc genhtml_branch_coverage=1 00:27:37.826 --rc genhtml_function_coverage=1 00:27:37.826 --rc genhtml_legend=1 00:27:37.826 --rc geninfo_all_blocks=1 00:27:37.826 --rc geninfo_unexecuted_blocks=1 00:27:37.826 00:27:37.826 ' 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.826 22:58:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:37.826 22:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.826 22:58:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:40.362 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:40.363 22:58:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.363 
22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:40.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.363 22:58:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:40.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.363 22:58:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:40.363 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:40.363 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:27:40.363 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:40.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:27:40.363 00:27:40.364 --- 10.0.0.2 ping statistics --- 00:27:40.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.364 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:27:40.364 00:27:40.364 --- 10.0.0.1 ping statistics --- 00:27:40.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.364 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.364 22:58:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=185489 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 185489 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 185489 ']' 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:40.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:40.364 [2024-12-10 22:58:47.719433] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:40.364 [2024-12-10 22:58:47.720542] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:27:40.364 [2024-12-10 22:58:47.720607] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.364 [2024-12-10 22:58:47.797235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:40.364 [2024-12-10 22:58:47.860104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.364 [2024-12-10 22:58:47.860155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.364 [2024-12-10 22:58:47.860182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.364 [2024-12-10 22:58:47.860193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.364 [2024-12-10 22:58:47.860202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:40.364 [2024-12-10 22:58:47.861802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.364 [2024-12-10 22:58:47.861876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.364 [2024-12-10 22:58:47.861879] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.364 [2024-12-10 22:58:47.951690] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:40.364 [2024-12-10 22:58:47.951892] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:40.364 [2024-12-10 22:58:47.951922] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:40.364 [2024-12-10 22:58:47.952136] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:40.364 22:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:40.364 22:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.364 22:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:40.364 22:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:40.623 [2024-12-10 22:58:48.250688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.623 22:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:40.881 22:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.139 [2024-12-10 22:58:48.786900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.139 22:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:41.399 22:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:41.658 Malloc0 00:27:41.658 22:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:41.917 Delay0 00:27:42.175 22:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.435 22:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:42.693 NULL1 00:27:42.693 22:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:42.950 22:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=185906 00:27:42.950 22:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:42.950 22:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:42.950 22:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.362 Read completed with error (sct=0, sc=11) 00:27:44.362 22:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:27:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.362 22:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:44.362 22:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:44.649 true 00:27:44.649 22:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:44.649 22:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.585 22:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.585 22:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:45.585 22:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:45.844 true 00:27:45.844 22:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:45.844 22:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.102 22:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.361 22:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:46.361 22:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:46.619 true 00:27:46.619 22:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:46.619 22:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.186 22:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.186 22:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:47.186 22:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:47.444 true 00:27:47.444 22:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:47.444 22:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.821 22:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.821 22:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:48.821 22:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:49.079 true 00:27:49.079 22:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:49.079 22:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.337 22:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.595 22:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:49.595 22:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:49.853 true 00:27:49.853 22:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:49.853 22:58:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.111 22:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.678 22:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:50.678 22:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:50.678 true 00:27:50.678 22:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:50.678 22:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.051 22:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.051 22:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:52.051 22:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:52.309 true 00:27:52.309 22:58:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:52.309 22:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.567 22:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.825 22:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:52.825 22:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:53.083 true 00:27:53.083 22:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:53.083 22:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.340 22:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.598 22:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:53.598 22:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:53.856 true 
00:27:53.856 22:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:53.856 22:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.795 22:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.053 22:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:55.053 22:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:55.311 true 00:27:55.311 22:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:55.311 22:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.569 22:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.827 22:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:55.827 22:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 
00:27:56.085 true 00:27:56.085 22:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:56.085 22:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.343 22:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.601 22:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:56.601 22:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:56.859 true 00:27:56.859 22:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:56.859 22:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.796 22:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.313 22:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:58.313 22:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:58.571 true 00:27:58.571 22:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:58.571 22:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.829 22:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.087 22:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:59.087 22:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:59.345 true 00:27:59.345 22:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:27:59.345 22:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.282 22:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:28:00.282 22:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:00.282 22:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:00.540 true 00:28:00.540 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:00.540 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.798 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.057 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:01.057 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:01.315 true 00:28:01.315 22:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:01.315 22:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.251 22:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.510 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:02.510 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:02.768 true 00:28:02.768 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:02.768 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.026 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.284 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:03.284 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:03.541 true 00:28:03.541 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:03.542 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.478 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.478 22:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:04.478 22:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:04.736 true 00:28:04.995 22:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:04.995 22:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.254 22:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.512 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:05.512 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:05.770 true 00:28:05.770 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:05.770 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.028 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.286 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:06.286 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:06.544 true 00:28:06.544 22:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:06.544 22:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.482 22:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.740 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:07.740 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:07.999 true 00:28:07.999 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
185906 00:28:07.999 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.257 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.515 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:08.515 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:08.773 true 00:28:08.773 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:08.773 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.710 22:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.968 22:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:09.968 22:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:10.226 true 00:28:10.226 22:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:10.226 22:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.513 22:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.803 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:10.803 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:10.803 true 00:28:10.803 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:10.803 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.371 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.371 22:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:11.371 22:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:11.629 true 00:28:11.629 22:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:11.629 22:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.004 22:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:13.004 22:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:13.004 22:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:13.262 Initializing NVMe Controllers 00:28:13.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:13.262 Controller IO queue size 128, less than required. 00:28:13.262 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:13.262 Controller IO queue size 128, less than required. 00:28:13.262 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:13.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:13.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:13.262 Initialization complete. Launching workers. 
00:28:13.262 ======================================================== 00:28:13.262 Latency(us) 00:28:13.262 Device Information : IOPS MiB/s Average min max 00:28:13.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 685.18 0.33 84304.04 3417.52 1068373.66 00:28:13.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9373.61 4.58 13655.14 2203.26 447819.69 00:28:13.262 ======================================================== 00:28:13.262 Total : 10058.79 4.91 18467.56 2203.26 1068373.66 00:28:13.262 00:28:13.262 true 00:28:13.262 22:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 185906 00:28:13.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (185906) - No such process 00:28:13.262 22:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 185906 00:28:13.262 22:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.521 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:13.779 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:13.779 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:13.779 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:13.779 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:13.779 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:14.037 null0 00:28:14.037 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:14.037 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:14.037 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:14.295 null1 00:28:14.295 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:14.295 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:14.295 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:14.555 null2 00:28:14.555 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:14.555 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:14.555 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:14.814 null3 00:28:14.814 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:28:14.814 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:14.814 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:15.073 null4 00:28:15.073 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.073 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.073 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:15.331 null5 00:28:15.589 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.589 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.589 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:15.847 null6 00:28:15.848 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.848 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.848 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:16.106 null7 00:28:16.106 22:59:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.106 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 189792 189793 189795 189797 189799 189801 189803 189805 00:28:16.107 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.107 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:16.363 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.363 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:16.363 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:28:16.363 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:16.363 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:16.363 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:16.363 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:16.363 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.621 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:16.879 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.879 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:16.879 22:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:16.879 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:16.879 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:16.880 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:16.880 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:16.880 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:17.138 22:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.138 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:17.396 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.396 22:59:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:17.396 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:17.396 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:17.396 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:17.654 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:17.654 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:17.654 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.912 22:59:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.912 22:59:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.912 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.171 22:59:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:18.171 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:18.171 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.171 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:18.171 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:18.171 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.171 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.171 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:18.429 22:59:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.429 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.429 22:59:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:18.688 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:18.688 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.688 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:18.688 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.688 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:18.688 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:18.688 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.688 22:59:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:18.946 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.946 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.946 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:18.946 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.946 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.947 22:59:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:18.947 22:59:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.947 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.205 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.205 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.463 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.463 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.463 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.463 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.463 22:59:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.463 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.721 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.721 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
2 nqn.2016-06.io.spdk:cnode1 null1 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.722 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.980 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.980 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.980 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.980 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.980 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.980 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.980 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.980 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.238 22:59:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.238 22:59:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.238 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.497 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.497 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.497 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.497 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.497 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.497 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:20.497 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.497 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.755 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.014 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.014 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.014 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.014 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:28:21.014 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.014 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.014 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.014 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:21.272 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.272 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.272 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.272 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.272 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.272 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.530 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.789 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.789 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.789 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.789 22:59:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.789 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.789 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:21.789 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.789 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:22.047 22:59:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.047 rmmod nvme_tcp 00:28:22.047 rmmod nvme_fabrics 00:28:22.047 rmmod nvme_keyring 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 185489 ']' 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 185489 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 185489 ']' 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 185489 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 185489 00:28:22.047 22:59:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 185489' 00:28:22.047 killing process with pid 185489 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 185489 00:28:22.047 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 185489 00:28:22.306 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:22.306 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:22.306 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:22.306 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:22.306 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:22.306 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:22.306 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:22.306 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:22.306 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:22.306 22:59:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.306 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.306 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.840 22:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:24.840 00:28:24.840 real 0m46.660s 00:28:24.840 user 3m15.775s 00:28:24.840 sys 0m21.620s 00:28:24.840 22:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.840 22:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:24.840 ************************************ 00:28:24.840 END TEST nvmf_ns_hotplug_stress 00:28:24.840 ************************************ 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:24.840 ************************************ 00:28:24.840 START TEST nvmf_delete_subsystem 00:28:24.840 ************************************ 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:24.840 * Looking for test storage... 00:28:24.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:24.840 
22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:24.840 22:59:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:24.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.840 --rc genhtml_branch_coverage=1 00:28:24.840 --rc genhtml_function_coverage=1 00:28:24.840 --rc genhtml_legend=1 00:28:24.840 --rc geninfo_all_blocks=1 00:28:24.840 --rc geninfo_unexecuted_blocks=1 00:28:24.840 00:28:24.840 ' 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:24.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.840 --rc genhtml_branch_coverage=1 00:28:24.840 --rc genhtml_function_coverage=1 00:28:24.840 --rc genhtml_legend=1 00:28:24.840 --rc geninfo_all_blocks=1 00:28:24.840 --rc geninfo_unexecuted_blocks=1 00:28:24.840 00:28:24.840 ' 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:24.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.840 --rc genhtml_branch_coverage=1 00:28:24.840 --rc genhtml_function_coverage=1 00:28:24.840 --rc genhtml_legend=1 00:28:24.840 --rc geninfo_all_blocks=1 00:28:24.840 --rc 
geninfo_unexecuted_blocks=1 00:28:24.840 00:28:24.840 ' 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:24.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.840 --rc genhtml_branch_coverage=1 00:28:24.840 --rc genhtml_function_coverage=1 00:28:24.840 --rc genhtml_legend=1 00:28:24.840 --rc geninfo_all_blocks=1 00:28:24.840 --rc geninfo_unexecuted_blocks=1 00:28:24.840 00:28:24.840 ' 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.840 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.841 
22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:24.841 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:24.841 22:59:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.743 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:26.744 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:26.744 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.744 22:59:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:26.744 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:26.744 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.744 22:59:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:26.744 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:28:27.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:28:27.003 00:28:27.003 --- 10.0.0.2 ping statistics --- 00:28:27.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.003 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:27.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:28:27.003 00:28:27.003 --- 10.0.0.1 ping statistics --- 00:28:27.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.003 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=192676 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 192676 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 192676 ']' 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.003 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.003 [2024-12-10 22:59:34.572511] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:27.003 [2024-12-10 22:59:34.573641] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:28:27.003 [2024-12-10 22:59:34.573699] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.003 [2024-12-10 22:59:34.644754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:27.003 [2024-12-10 22:59:34.699087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.003 [2024-12-10 22:59:34.699141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.003 [2024-12-10 22:59:34.699168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.003 [2024-12-10 22:59:34.699178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.003 [2024-12-10 22:59:34.699187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.004 [2024-12-10 22:59:34.700432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.004 [2024-12-10 22:59:34.700437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.262 [2024-12-10 22:59:34.785027] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:27.262 [2024-12-10 22:59:34.785030] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:27.262 [2024-12-10 22:59:34.785316] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.262 [2024-12-10 22:59:34.841053] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.262 [2024-12-10 22:59:34.861276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.262 NULL1 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 
1000000 -t 1000000 -w 1000000 -n 1000000 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.262 Delay0 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.262 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.263 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.263 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.263 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.263 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=192706 00:28:27.263 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:27.263 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:27.263 [2024-12-10 22:59:34.937583] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:29.786 22:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.786 22:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.786 22:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 starting I/O failed: -6 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 starting I/O failed: -6 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 starting I/O failed: -6 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 starting I/O failed: -6 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 starting I/O failed: -6 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 starting I/O failed: -6 00:28:29.786 Read completed with error (sct=0, 
sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 starting I/O failed: -6 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 starting I/O failed: -6 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 starting I/O failed: -6 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.786 [2024-12-10 22:59:37.070906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb224a0 is same with the state(6) to be set 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 Write completed with error (sct=0, sc=8) 00:28:29.786 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 starting I/O failed: -6 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 starting I/O 
failed: -6 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 starting I/O failed: -6 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 starting I/O failed: -6 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 starting I/O failed: -6 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 
Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 starting I/O failed: -6 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 starting I/O failed: -6 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 starting I/O failed: -6 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 starting I/O failed: -6 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 starting I/O failed: -6 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 
Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 starting I/O failed: -6 00:28:29.787 [2024-12-10 22:59:37.071688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f407400d4f0 is same with the state(6) to be set 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error 
(sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Write completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:29.787 Read completed with error (sct=0, sc=8) 00:28:30.353 [2024-12-10 22:59:38.034290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb239b0 is same with the state(6) to be set 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 
00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 [2024-12-10 22:59:38.068816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22680 is same with the state(6) to be set 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 [2024-12-10 22:59:38.069017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb222c0 is same with the state(6) to be set 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 
Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 [2024-12-10 22:59:38.073901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f407400d060 is same with the state(6) to be set 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Read completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 Write completed with error (sct=0, sc=8) 00:28:30.353 [2024-12-10 22:59:38.074077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f407400d820 is same with the state(6) to be set 
00:28:30.353 Initializing NVMe Controllers 00:28:30.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.353 Controller IO queue size 128, less than required. 00:28:30.353 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:30.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:30.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:30.353 Initialization complete. Launching workers. 00:28:30.353 ======================================================== 00:28:30.353 Latency(us) 00:28:30.353 Device Information : IOPS MiB/s Average min max 00:28:30.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 152.36 0.07 937492.78 605.41 1011538.81 00:28:30.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.77 0.08 906645.78 380.79 1012902.40 00:28:30.353 ======================================================== 00:28:30.353 Total : 317.13 0.15 921465.85 380.79 1012902.40 00:28:30.353 00:28:30.353 [2024-12-10 22:59:38.075000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb239b0 (9): Bad file descriptor 00:28:30.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:30.353 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.353 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:30.353 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 192706 00:28:30.353 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:30.920 22:59:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 192706 00:28:30.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (192706) - No such process 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 192706 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 192706 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 192706 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:30.920 22:59:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.920 [2024-12-10 22:59:38.597232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.920 22:59:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=193218 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193218 00:28:30.920 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:31.178 [2024-12-10 22:59:38.657441] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:31.436 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:31.436 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193218 00:28:31.436 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:32.001 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:32.001 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193218 00:28:32.001 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:32.571 22:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:32.571 22:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193218 00:28:32.571 22:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:33.137 22:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:33.137 22:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193218 00:28:33.137 22:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:33.395 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:33.395 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193218 00:28:33.395 22:59:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:33.960 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:33.960 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193218 00:28:33.960 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:34.218 Initializing NVMe Controllers 00:28:34.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:34.218 Controller IO queue size 128, less than required. 00:28:34.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:34.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:34.218 Initialization complete. Launching workers. 
00:28:34.218 ======================================================== 00:28:34.218 Latency(us) 00:28:34.218 Device Information : IOPS MiB/s Average min max 00:28:34.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004947.76 1000183.19 1045411.59 00:28:34.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005859.54 1000292.87 1042772.87 00:28:34.218 ======================================================== 00:28:34.218 Total : 256.00 0.12 1005403.65 1000183.19 1045411.59 00:28:34.218 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193218 00:28:34.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (193218) - No such process 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 193218 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:34.476 rmmod nvme_tcp 00:28:34.476 rmmod nvme_fabrics 00:28:34.476 rmmod nvme_keyring 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 192676 ']' 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 192676 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 192676 ']' 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 192676 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.476 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 192676 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 192676' 00:28:34.734 killing process with pid 192676 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 192676 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 192676 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:34.734 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:34.735 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.735 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.735 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.306 00:28:37.306 real 0m12.420s 00:28:37.306 user 0m24.726s 00:28:37.306 sys 0m3.808s 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.306 ************************************ 00:28:37.306 END TEST nvmf_delete_subsystem 00:28:37.306 ************************************ 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:37.306 ************************************ 00:28:37.306 START TEST nvmf_host_management 00:28:37.306 ************************************ 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:37.306 * Looking for test storage... 
00:28:37.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.306 22:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.306 --rc genhtml_branch_coverage=1 00:28:37.306 --rc genhtml_function_coverage=1 00:28:37.306 --rc genhtml_legend=1 00:28:37.306 --rc geninfo_all_blocks=1 00:28:37.306 --rc geninfo_unexecuted_blocks=1 00:28:37.306 00:28:37.306 ' 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.306 --rc genhtml_branch_coverage=1 00:28:37.306 --rc genhtml_function_coverage=1 00:28:37.306 --rc genhtml_legend=1 00:28:37.306 --rc geninfo_all_blocks=1 00:28:37.306 --rc geninfo_unexecuted_blocks=1 00:28:37.306 00:28:37.306 ' 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.306 --rc genhtml_branch_coverage=1 00:28:37.306 --rc genhtml_function_coverage=1 00:28:37.306 --rc genhtml_legend=1 00:28:37.306 --rc geninfo_all_blocks=1 00:28:37.306 --rc geninfo_unexecuted_blocks=1 00:28:37.306 00:28:37.306 ' 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:37.306 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.306 --rc genhtml_branch_coverage=1 00:28:37.306 --rc genhtml_function_coverage=1 00:28:37.306 --rc genhtml_legend=1 00:28:37.306 --rc geninfo_all_blocks=1 00:28:37.306 --rc geninfo_unexecuted_blocks=1 00:28:37.306 00:28:37.306 ' 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.306 22:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.306 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.307 
22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:37.307 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:39.224 
22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.224 22:59:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:39.224 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.224 22:59:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:39.224 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.224 22:59:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:39.224 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:39.224 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:39.224 22:59:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:39.224 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
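[annotation] The trace above (nvmf/common.sh@313–432) classifies NIC PCI functions by vendor:device ID into per-family bash arrays (e810, x722, mlx) and then enumerates them. A minimal, unprivileged sketch of that pattern follows; the `pci_bus_cache` entries here are hypothetical sample data filled in by hand, not read from this host's sysfs:

```shell
#!/usr/bin/env bash
# Sketch of the per-family PCI classification seen in nvmf/common.sh.
# Keys are "vendor:device"; values are space-separated PCI addresses.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1"   # Intel E810 (sample data)
  ["0x15b3:0x1017"]="0000:81:00.0"                # Mellanox CX-5 (sample data)
)
intel=0x8086 mellanox=0x15b3
e810=() mlx=() pci_devs=()

# Unquoted expansion word-splits the cached value into array elements.
e810+=(${pci_bus_cache["$intel:0x159b"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})

# For TCP on E810 hardware the script narrows pci_devs to the e810 list.
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
  echo "Found $pci"
done
```

This mirrors the `Found 0000:0a:00.0 (0x8086 - 0x159b)` lines in the log; the real script also checks driver binding (`ice == unknown`/`unbound`) per device.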
00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:39.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:28:39.225 00:28:39.225 --- 10.0.0.2 ping statistics --- 00:28:39.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.225 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:39.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:28:39.225 00:28:39.225 --- 10.0.0.1 ping statistics --- 00:28:39.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.225 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
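[annotation] nvmf/common.sh@265–291 above moves the target-side interface into a private network namespace, assigns point-to-point addresses, opens the NVMe/TCP port in iptables, and verifies reachability with ping. The dry-run sketch below reproduces that sequence without touching the system: `run` only records and echoes each command (swap its body for `sudo "$@"` to execute for real). Interface names and IPs mirror the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init's namespace plumbing.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
cmds=()
run() { cmds+=("$*"); echo "+ $*"; }   # record instead of executing

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                       # target NIC into the ns
run ip addr add 10.0.0.1/24 dev "$INI_IF"                   # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

The real script then pings 10.0.0.2 from the default namespace and 10.0.0.1 from inside the namespace (as shown above) before declaring the topology usable, and prefixes all target commands with `ip netns exec cvl_0_0_ns_spdk`.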
00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:39.225 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=195564 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 195564 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 195564 ']' 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.486 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.486 [2024-12-10 22:59:47.019637] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:39.486 [2024-12-10 22:59:47.020707] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:28:39.486 [2024-12-10 22:59:47.020766] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.486 [2024-12-10 22:59:47.095263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.486 [2024-12-10 22:59:47.156548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.486 [2024-12-10 22:59:47.156610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.486 [2024-12-10 22:59:47.156624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.486 [2024-12-10 22:59:47.156635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.486 [2024-12-10 22:59:47.156645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:39.486 [2024-12-10 22:59:47.158467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.486 [2024-12-10 22:59:47.158529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.486 [2024-12-10 22:59:47.158598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:39.486 [2024-12-10 22:59:47.158603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.746 [2024-12-10 22:59:47.256589] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:39.746 [2024-12-10 22:59:47.256807] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:39.746 [2024-12-10 22:59:47.257118] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:39.746 [2024-12-10 22:59:47.257846] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:39.746 [2024-12-10 22:59:47.258062] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:28:39.746 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.746 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.747 [2024-12-10 22:59:47.307309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.747 22:59:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.747 Malloc0 00:28:39.747 [2024-12-10 22:59:47.383475] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=195609 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 195609 /var/tmp/bdevperf.sock 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 195609 ']' 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:39.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:39.747 { 00:28:39.747 "params": { 00:28:39.747 "name": "Nvme$subsystem", 00:28:39.747 "trtype": "$TEST_TRANSPORT", 00:28:39.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.747 "adrfam": "ipv4", 00:28:39.747 "trsvcid": "$NVMF_PORT", 00:28:39.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.747 "hdgst": ${hdgst:-false}, 00:28:39.747 "ddgst": ${ddgst:-false} 00:28:39.747 }, 00:28:39.747 "method": "bdev_nvme_attach_controller" 00:28:39.747 } 00:28:39.747 EOF 00:28:39.747 )") 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:39.747 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:39.747 "params": { 00:28:39.747 "name": "Nvme0", 00:28:39.747 "trtype": "tcp", 00:28:39.747 "traddr": "10.0.0.2", 00:28:39.747 "adrfam": "ipv4", 00:28:39.747 "trsvcid": "4420", 00:28:39.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:39.747 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:39.747 "hdgst": false, 00:28:39.747 "ddgst": false 00:28:39.747 }, 00:28:39.747 "method": "bdev_nvme_attach_controller" 00:28:39.747 }' 00:28:39.747 [2024-12-10 22:59:47.468168] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:28:39.747 [2024-12-10 22:59:47.468258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid195609 ] 00:28:40.008 [2024-12-10 22:59:47.538498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.008 [2024-12-10 22:59:47.598657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.268 Running I/O for 10 seconds... 
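[annotation] The `gen_nvmf_target_json` trace above (nvmf/common.sh@560–586) builds one JSON controller stanza per subsystem with a heredoc, accumulates them in a `config` array, and joins them for bdevperf's `--json /dev/fd/63` input. A simplified sketch of that heredoc pattern, with stand-in values instead of the script's `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP` variables (the real script also wraps the array in a top-level config object and pipes it through `jq`):

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json's per-subsystem heredoc accumulation.
config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the stanzas with commas into a JSON array.
IFS=','; joined="${config[*]}"; unset IFS
printf '[%s]\n' "$joined"
```

Each array element is a complete JSON object, so the comma-join via `IFS` yields a syntactically valid array regardless of how many subsystems were requested.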
00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:40.268 22:59:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:40.268 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.529 [2024-12-10 22:59:48.207305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
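[annotation] The `waitforio` trace above (host_management.sh@52–64) polls `bdev_get_iostat` via RPC, extracting `num_read_ops` with `jq`, until the count crosses 100 or ten attempts are exhausted (67 ops on the first poll, 579 on the second). The sketch below shows the same bounded-polling shape with a stub in place of the RPC call; `fake_iostat` is hypothetical, and the real loop sleeps 0.25s between attempts:

```shell
#!/usr/bin/env bash
# Bounded polling loop in the style of host_management.sh's waitforio.
reads=0
fake_iostat() {            # stand-in for: rpc_cmd bdev_get_iostat | jq -r '.bdevs[0].num_read_ops'
  reads=$((reads + 250))
  read_io_count=$reads
}

ret=1
i=10
while (( i != 0 )); do
  fake_iostat
  if [ "$read_io_count" -ge 100 ]; then
    ret=0                  # enough I/O observed; bdev is live
    break
  fi
  i=$((i - 1))             # real script: sleep 0.25 between polls
done
echo "read_io_count=$read_io_count ret=$ret"
```

Returning `ret=0` on the threshold (rather than asserting on the first poll) tolerates the warm-up window visible in the log, where the first sample landed below 100.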
state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is 
same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 [2024-12-10 22:59:48.207704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be 
set 00:28:40.529 [2024-12-10 22:59:48.207716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dc00 is same with the state(6) to be set 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.529 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.529 [2024-12-10 22:59:48.215127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.529 [2024-12-10 22:59:48.215170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.529 [2024-12-10 22:59:48.215199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.529 [2024-12-10 22:59:48.215214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.529 [2024-12-10 22:59:48.215228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.529 [2024-12-10 22:59:48.215241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.529 [2024-12-10 22:59:48.215254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:40.530 [2024-12-10 22:59:48.215267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.530 [2024-12-10 22:59:48.215280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c0670 is same with the state(6) to be set 00:28:40.530 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.530 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:40.530 [2024-12-10 22:59:48.223794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.530 [2024-12-10 22:59:48.223823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.530 [2024-12-10 22:59:48.223872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.530 [2024-12-10 22:59:48.223888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.530 [2024-12-10 22:59:48.223905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.530 [2024-12-10 22:59:48.223919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.531 [2024-12-10 22:59:48.225908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c0670 (9): Bad file descriptor 00:28:40.531 [2024-12-10 22:59:48.227023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:40.531 task offset: 81664 on job bdev=Nvme0n1 fails 00:28:40.531 00:28:40.531 Latency(us) 00:28:40.531 [2024-12-10T21:59:48.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.531 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.531 Job: Nvme0n1 ended in
about 0.41 seconds with error 00:28:40.531 Verification LBA range: start 0x0 length 0x400 00:28:40.531 Nvme0n1 : 0.41 1568.64 98.04 157.36 0.00 36023.91 2463.67 34952.53 00:28:40.531 [2024-12-10T21:59:48.263Z] =================================================================================================================== 00:28:40.531 [2024-12-10T21:59:48.263Z] Total : 1568.64 98.04 157.36 0.00 36023.91 2463.67 34952.53 00:28:40.531 [2024-12-10 22:59:48.228914] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:40.790 [2024-12-10 22:59:48.320700] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:28:41.729 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 195609 00:28:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (195609) - No such process 00:28:41.730 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:41.730 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:41.730 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:41.730 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:41.730 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:41.730 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:41.730 
22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.730 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.730 { 00:28:41.730 "params": { 00:28:41.730 "name": "Nvme$subsystem", 00:28:41.730 "trtype": "$TEST_TRANSPORT", 00:28:41.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.730 "adrfam": "ipv4", 00:28:41.730 "trsvcid": "$NVMF_PORT", 00:28:41.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.730 "hdgst": ${hdgst:-false}, 00:28:41.730 "ddgst": ${ddgst:-false} 00:28:41.730 }, 00:28:41.730 "method": "bdev_nvme_attach_controller" 00:28:41.730 } 00:28:41.730 EOF 00:28:41.730 )") 00:28:41.730 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:41.730 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:41.730 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:41.730 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:41.730 "params": { 00:28:41.730 "name": "Nvme0", 00:28:41.730 "trtype": "tcp", 00:28:41.730 "traddr": "10.0.0.2", 00:28:41.730 "adrfam": "ipv4", 00:28:41.730 "trsvcid": "4420", 00:28:41.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:41.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:41.730 "hdgst": false, 00:28:41.730 "ddgst": false 00:28:41.730 }, 00:28:41.730 "method": "bdev_nvme_attach_controller" 00:28:41.730 }' 00:28:41.730 [2024-12-10 22:59:49.273780] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:28:41.730 [2024-12-10 22:59:49.273890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid195883 ] 00:28:41.730 [2024-12-10 22:59:49.344091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.730 [2024-12-10 22:59:49.403889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.990 Running I/O for 1 seconds... 00:28:43.185 1600.00 IOPS, 100.00 MiB/s 00:28:43.185 Latency(us) 00:28:43.185 [2024-12-10T21:59:50.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.185 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.185 Verification LBA range: start 0x0 length 0x400 00:28:43.185 Nvme0n1 : 1.02 1623.69 101.48 0.00 0.00 38787.95 5582.70 34564.17 00:28:43.185 [2024-12-10T21:59:50.917Z] =================================================================================================================== 00:28:43.185 [2024-12-10T21:59:50.917Z] Total : 1623.69 101.48 0.00 0.00 38787.95 5582.70 34564.17 00:28:43.185 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:43.185 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:43.185 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:43.185 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:43.185 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:28:43.185 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:43.185 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:43.185 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:43.185 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:43.185 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:43.185 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:43.185 rmmod nvme_tcp 00:28:43.185 rmmod nvme_fabrics 00:28:43.446 rmmod nvme_keyring 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 195564 ']' 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 195564 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 195564 ']' 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 195564 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:43.446 22:59:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 195564 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 195564' 00:28:43.446 killing process with pid 195564 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 195564 00:28:43.446 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 195564 00:28:43.707 [2024-12-10 22:59:51.212259] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:43.708 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:43.708 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:43.708 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:43.708 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:43.708 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:43.708 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:43.708 22:59:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:43.708 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:43.708 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:43.708 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.708 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.708 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.618 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:45.618 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:45.618 00:28:45.618 real 0m8.785s 00:28:45.618 user 0m17.074s 00:28:45.618 sys 0m3.816s 00:28:45.618 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.618 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.618 ************************************ 00:28:45.618 END TEST nvmf_host_management 00:28:45.618 ************************************ 00:28:45.618 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:45.618 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:45.618 
22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.618 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:45.618 ************************************ 00:28:45.618 START TEST nvmf_lvol 00:28:45.618 ************************************ 00:28:45.618 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:45.877 * Looking for test storage... 00:28:45.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:45.877 22:59:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:45.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.877 --rc genhtml_branch_coverage=1 00:28:45.877 --rc 
genhtml_function_coverage=1 00:28:45.877 --rc genhtml_legend=1 00:28:45.877 --rc geninfo_all_blocks=1 00:28:45.877 --rc geninfo_unexecuted_blocks=1 00:28:45.877 00:28:45.877 ' 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:45.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.877 --rc genhtml_branch_coverage=1 00:28:45.877 --rc genhtml_function_coverage=1 00:28:45.877 --rc genhtml_legend=1 00:28:45.877 --rc geninfo_all_blocks=1 00:28:45.877 --rc geninfo_unexecuted_blocks=1 00:28:45.877 00:28:45.877 ' 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:45.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.877 --rc genhtml_branch_coverage=1 00:28:45.877 --rc genhtml_function_coverage=1 00:28:45.877 --rc genhtml_legend=1 00:28:45.877 --rc geninfo_all_blocks=1 00:28:45.877 --rc geninfo_unexecuted_blocks=1 00:28:45.877 00:28:45.877 ' 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:45.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.877 --rc genhtml_branch_coverage=1 00:28:45.877 --rc genhtml_function_coverage=1 00:28:45.877 --rc genhtml_legend=1 00:28:45.877 --rc geninfo_all_blocks=1 00:28:45.877 --rc geninfo_unexecuted_blocks=1 00:28:45.877 00:28:45.877 ' 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.877 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.878 22:59:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:45.878 22:59:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:45.878 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:48.412 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:48.412 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:48.412 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.413 22:59:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:48.413 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:48.413 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:48.413 22:59:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.413 22:59:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:48.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:28:48.413 00:28:48.413 --- 10.0.0.2 ping statistics --- 00:28:48.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.413 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:28:48.413 00:28:48.413 --- 10.0.0.1 ping statistics --- 00:28:48.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.413 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:48.413 
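As an aside on the topology the log has just set up: the target interface (cvl_0_0, 10.0.0.2/24) is moved into the cvl_0_0_ns_spdk namespace while the initiator side (cvl_0_1, 10.0.0.1/24) stays in the root namespace, and both pings succeed because the two addresses fall in the same /24, so each side reaches the other over the on-link route. A minimal sketch of that subnet check (the `same_subnet` helper is hypothetical, not part of nvmf/common.sh):

```shell
# Check whether two IPv4 addresses share the same network for a given
# prefix length -- the condition under which the cross-namespace pings
# above work with no extra routing. Hypothetical helper, bash arithmetic only.
same_subnet() {
  local a1 a2 a3 a4 b1 b2 b3 b4 a b mask prefix=$3
  IFS=. read -r a1 a2 a3 a4 <<< "$1"
  IFS=. read -r b1 b2 b3 b4 <<< "$2"
  a=$(( (a1 << 24) | (a2 << 16) | (a3 << 8) | a4 ))
  b=$(( (b1 << 24) | (b2 << 16) | (b3 << 8) | b4 ))
  mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  if [ $(( a & mask )) -eq $(( b & mask )) ]; then echo yes; else echo no; fi
}

same_subnet 10.0.0.1 10.0.0.2 24   # yes -- the two test addresses above
same_subnet 10.0.0.1 10.0.1.2 24   # no  -- a /24 boundary would break it
```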
22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=198079 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 198079 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 198079 ']' 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.413 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:48.413 [2024-12-10 22:59:55.966832] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:28:48.413 [2024-12-10 22:59:55.967876] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:28:48.413 [2024-12-10 22:59:55.967929] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.413 [2024-12-10 22:59:56.041677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:48.413 [2024-12-10 22:59:56.101146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.413 [2024-12-10 22:59:56.101191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.413 [2024-12-10 22:59:56.101219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.413 [2024-12-10 22:59:56.101230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.413 [2024-12-10 22:59:56.101240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.413 [2024-12-10 22:59:56.102637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.413 [2024-12-10 22:59:56.102677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.413 [2024-12-10 22:59:56.102682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.672 [2024-12-10 22:59:56.191363] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:48.672 [2024-12-10 22:59:56.191599] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:48.672 [2024-12-10 22:59:56.191630] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
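The reactor notices above line up with the masks on the command line: nvmf_tgt runs with `-m 0x7` (reactors on cores 0, 1, 2, hence "Total cores available: 3"), and the perf job later uses `-c 0x18` (cores 3 and 4, matching the "lcore 3"/"lcore 4" associations in its output). A hedged sketch of decoding such a mask (`decode_coremask` is a hypothetical helper, not an SPDK script):

```shell
# Decode a hex core mask (as passed to -m / -c) into the CPU cores it
# selects: each set bit i enables core i.
decode_coremask() {
  local mask=$(( $1 )) cores=() i
  for ((i = 0; i < 64; i++)); do
    if (( (mask >> i) & 1 )); then cores+=("$i"); fi
  done
  echo "${cores[*]}"
}

decode_coremask 0x7    # -m 0x7  -> cores 0 1 2
decode_coremask 0x18   # -c 0x18 -> cores 3 4
```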
00:28:48.672 [2024-12-10 22:59:56.191855] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:48.672 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.672 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:48.672 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.672 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:48.672 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:48.672 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.672 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:48.931 [2024-12-10 22:59:56.499372] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.931 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:49.191 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:49.191 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:49.450 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:49.450 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:49.710 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:49.970 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0f418260-3353-45c2-9b67-eccb1a731b8c 00:28:49.970 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0f418260-3353-45c2-9b67-eccb1a731b8c lvol 20 00:28:50.537 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0e636ab5-348a-46dd-9f66-59dc28f7d871 00:28:50.537 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:50.537 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0e636ab5-348a-46dd-9f66-59dc28f7d871 00:28:50.797 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:51.057 [2024-12-10 22:59:58.767493] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.316 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:51.574 
22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=198501 00:28:51.574 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:51.574 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:52.508 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0e636ab5-348a-46dd-9f66-59dc28f7d871 MY_SNAPSHOT 00:28:52.766 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e226b4a7-6947-4513-9e59-3840e3edf996 00:28:52.767 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0e636ab5-348a-46dd-9f66-59dc28f7d871 30 00:28:53.025 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e226b4a7-6947-4513-9e59-3840e3edf996 MY_CLONE 00:28:53.593 23:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d7ceafa5-ec8d-4239-8862-2d2df2d014b0 00:28:53.593 23:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d7ceafa5-ec8d-4239-8862-2d2df2d014b0 00:28:54.159 23:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 198501 00:29:02.299 Initializing NVMe Controllers 00:29:02.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:02.299 
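The summary table that spdk_nvme_perf prints just below is internally consistent and can be cross-checked: the Total IOPS row is the sum of the two per-core rows, MiB/s is IOPS × 4096 B / 2^20 (i.e. IOPS / 256 at this 4 KiB I/O size), and the total average latency is the IOPS-weighted mean of the per-core averages. A small awk recomputation using the values from this run:

```shell
# Recompute the Total row of the perf summary from the two per-core rows.
# IOPS/latency values are copied from this log; -o 4096 means MiB/s = IOPS/256.
awk 'BEGIN {
  iops1 = 10451.50; lat1 = 12253.05   # NSID 1 from core 3
  iops2 = 10384.60; lat2 = 12328.64   # NSID 1 from core 4
  total = iops1 + iops2
  printf "IOPS=%.2f MiB/s=%.2f avg_lat_us=%.2f\n",
         total, total / 256, (iops1 * lat1 + iops2 * lat2) / total
}'
# prints: IOPS=20836.10 MiB/s=81.39 avg_lat_us=12290.72  (matches the Total row)
```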
Controller IO queue size 128, less than required. 00:29:02.300 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:02.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:02.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:02.300 Initialization complete. Launching workers. 00:29:02.300 ======================================================== 00:29:02.300 Latency(us) 00:29:02.300 Device Information : IOPS MiB/s Average min max 00:29:02.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10451.50 40.83 12253.05 6668.25 75381.52 00:29:02.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10384.60 40.56 12328.64 4585.11 75108.46 00:29:02.300 ======================================================== 00:29:02.300 Total : 20836.10 81.39 12290.72 4585.11 75381.52 00:29:02.300 00:29:02.300 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:02.300 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0e636ab5-348a-46dd-9f66-59dc28f7d871 00:29:02.300 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f418260-3353-45c2-9b67-eccb1a731b8c 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:02.869 rmmod nvme_tcp 00:29:02.869 rmmod nvme_fabrics 00:29:02.869 rmmod nvme_keyring 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 198079 ']' 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 198079 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 198079 ']' 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 198079 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 198079 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 198079' 00:29:02.869 killing process with pid 198079 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 198079 00:29:02.869 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 198079 00:29:03.130 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:03.130 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:03.130 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:03.130 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:03.130 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:03.130 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:03.130 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:03.130 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:03.130 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:03.130 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.130 23:00:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.130 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:05.667 00:29:05.667 real 0m19.429s 00:29:05.667 user 0m56.853s 00:29:05.667 sys 0m7.732s 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:05.667 ************************************ 00:29:05.667 END TEST nvmf_lvol 00:29:05.667 ************************************ 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:05.667 ************************************ 00:29:05.667 START TEST nvmf_lvs_grow 00:29:05.667 ************************************ 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:05.667 * Looking for test storage... 
00:29:05.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.667 23:00:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:05.667 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.668 23:00:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:05.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.668 --rc genhtml_branch_coverage=1 00:29:05.668 --rc genhtml_function_coverage=1 00:29:05.668 --rc genhtml_legend=1 00:29:05.668 --rc geninfo_all_blocks=1 00:29:05.668 --rc geninfo_unexecuted_blocks=1 00:29:05.668 00:29:05.668 ' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:05.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.668 --rc genhtml_branch_coverage=1 00:29:05.668 --rc genhtml_function_coverage=1 00:29:05.668 --rc genhtml_legend=1 00:29:05.668 --rc geninfo_all_blocks=1 00:29:05.668 --rc geninfo_unexecuted_blocks=1 00:29:05.668 00:29:05.668 ' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:05.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.668 --rc genhtml_branch_coverage=1 00:29:05.668 --rc genhtml_function_coverage=1 00:29:05.668 --rc genhtml_legend=1 00:29:05.668 --rc geninfo_all_blocks=1 00:29:05.668 --rc geninfo_unexecuted_blocks=1 00:29:05.668 00:29:05.668 ' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:05.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.668 --rc genhtml_branch_coverage=1 00:29:05.668 --rc genhtml_function_coverage=1 00:29:05.668 --rc genhtml_legend=1 00:29:05.668 --rc geninfo_all_blocks=1 00:29:05.668 --rc 
geninfo_unexecuted_blocks=1 00:29:05.668 00:29:05.668 ' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:05.668 23:00:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.668 23:00:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.668 23:00:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.668 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:07.572 
23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:07.572 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.573 23:00:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.573 23:00:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:07.573 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:07.573 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:07.573 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.573 23:00:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:07.573 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.573 
23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.573 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:29:07.573 00:29:07.573 --- 10.0.0.2 ping statistics --- 00:29:07.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.573 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:07.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:29:07.573 00:29:07.573 --- 10.0.0.1 ping statistics --- 00:29:07.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.573 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:07.573 23:00:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=202382 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 202382 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 202382 ']' 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.573 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:07.573 [2024-12-10 23:00:15.152805] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:07.573 [2024-12-10 23:00:15.153870] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:29:07.573 [2024-12-10 23:00:15.153937] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.573 [2024-12-10 23:00:15.229637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.573 [2024-12-10 23:00:15.285260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.573 [2024-12-10 23:00:15.285315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.573 [2024-12-10 23:00:15.285344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.573 [2024-12-10 23:00:15.285356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.573 [2024-12-10 23:00:15.285365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.573 [2024-12-10 23:00:15.285975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.831 [2024-12-10 23:00:15.374238] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:07.831 [2024-12-10 23:00:15.374510] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:07.831 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.831 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:07.831 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.831 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.831 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:07.831 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.831 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:08.089 [2024-12-10 23:00:15.674593] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:08.089 ************************************ 00:29:08.089 START TEST lvs_grow_clean 00:29:08.089 ************************************ 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:08.089 23:00:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:08.089 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:08.350 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:08.350 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:08.610 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=572ca6b7-7be9-4270-8a48-d04f8d6fd81e 00:29:08.610 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e 00:29:08.610 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:08.869 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:08.869 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:08.869 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e lvol 150 00:29:09.129 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0e5a8891-06df-4f05-bfb1-9dfe8ee885dd 00:29:09.129 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:09.129 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:09.424 [2024-12-10 23:00:17.118467] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:09.424 [2024-12-10 23:00:17.118604] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:09.424 true 00:29:09.710 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:09.710 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e 00:29:09.710 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:09.710 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:09.970 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0e5a8891-06df-4f05-bfb1-9dfe8ee885dd 00:29:10.230 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:10.491 [2024-12-10 23:00:18.214932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.751 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:11.010 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=202822 00:29:11.010 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:11.010 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:11.010 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 202822 /var/tmp/bdevperf.sock 00:29:11.010 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 202822 ']' 00:29:11.010 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:11.010 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.010 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:11.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:11.010 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.010 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:11.010 [2024-12-10 23:00:18.540698] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:29:11.010 [2024-12-10 23:00:18.540799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202822 ] 00:29:11.010 [2024-12-10 23:00:18.609278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.010 [2024-12-10 23:00:18.668312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:11.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:11.526 Nvme0n1 00:29:11.526 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:11.784 [ 00:29:11.784 { 00:29:11.784 "name": "Nvme0n1", 00:29:11.784 "aliases": [ 00:29:11.784 "0e5a8891-06df-4f05-bfb1-9dfe8ee885dd" 00:29:11.784 ], 00:29:11.784 "product_name": "NVMe disk", 00:29:11.784 
"block_size": 4096, 00:29:11.784 "num_blocks": 38912, 00:29:11.784 "uuid": "0e5a8891-06df-4f05-bfb1-9dfe8ee885dd", 00:29:11.784 "numa_id": 0, 00:29:11.784 "assigned_rate_limits": { 00:29:11.784 "rw_ios_per_sec": 0, 00:29:11.784 "rw_mbytes_per_sec": 0, 00:29:11.784 "r_mbytes_per_sec": 0, 00:29:11.784 "w_mbytes_per_sec": 0 00:29:11.784 }, 00:29:11.784 "claimed": false, 00:29:11.784 "zoned": false, 00:29:11.784 "supported_io_types": { 00:29:11.784 "read": true, 00:29:11.784 "write": true, 00:29:11.784 "unmap": true, 00:29:11.784 "flush": true, 00:29:11.784 "reset": true, 00:29:11.784 "nvme_admin": true, 00:29:11.784 "nvme_io": true, 00:29:11.784 "nvme_io_md": false, 00:29:11.784 "write_zeroes": true, 00:29:11.784 "zcopy": false, 00:29:11.784 "get_zone_info": false, 00:29:11.784 "zone_management": false, 00:29:11.784 "zone_append": false, 00:29:11.784 "compare": true, 00:29:11.784 "compare_and_write": true, 00:29:11.784 "abort": true, 00:29:11.784 "seek_hole": false, 00:29:11.784 "seek_data": false, 00:29:11.784 "copy": true, 00:29:11.784 "nvme_iov_md": false 00:29:11.784 }, 00:29:11.784 "memory_domains": [ 00:29:11.784 { 00:29:11.784 "dma_device_id": "system", 00:29:11.784 "dma_device_type": 1 00:29:11.784 } 00:29:11.784 ], 00:29:11.784 "driver_specific": { 00:29:11.784 "nvme": [ 00:29:11.784 { 00:29:11.784 "trid": { 00:29:11.784 "trtype": "TCP", 00:29:11.784 "adrfam": "IPv4", 00:29:11.784 "traddr": "10.0.0.2", 00:29:11.784 "trsvcid": "4420", 00:29:11.784 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:11.784 }, 00:29:11.784 "ctrlr_data": { 00:29:11.784 "cntlid": 1, 00:29:11.784 "vendor_id": "0x8086", 00:29:11.784 "model_number": "SPDK bdev Controller", 00:29:11.784 "serial_number": "SPDK0", 00:29:11.784 "firmware_revision": "25.01", 00:29:11.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:11.784 "oacs": { 00:29:11.784 "security": 0, 00:29:11.784 "format": 0, 00:29:11.784 "firmware": 0, 00:29:11.784 "ns_manage": 0 00:29:11.784 }, 00:29:11.784 "multi_ctrlr": true, 
00:29:11.784 "ana_reporting": false 00:29:11.784 }, 00:29:11.784 "vs": { 00:29:11.784 "nvme_version": "1.3" 00:29:11.784 }, 00:29:11.784 "ns_data": { 00:29:11.784 "id": 1, 00:29:11.784 "can_share": true 00:29:11.784 } 00:29:11.784 } 00:29:11.784 ], 00:29:11.784 "mp_policy": "active_passive" 00:29:11.784 } 00:29:11.784 } 00:29:11.784 ] 00:29:11.784 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=202957 00:29:11.784 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:11.784 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:12.044 Running I/O for 10 seconds... 00:29:12.980 Latency(us) 00:29:12.980 [2024-12-10T22:00:20.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:12.980 Nvme0n1 : 1.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:29:12.980 [2024-12-10T22:00:20.712Z] =================================================================================================================== 00:29:12.980 [2024-12-10T22:00:20.712Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:29:12.980 00:29:13.914 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e 00:29:13.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:13.914 Nvme0n1 : 2.00 15145.00 59.16 0.00 0.00 0.00 0.00 0.00 00:29:13.914 [2024-12-10T22:00:21.646Z] 
=================================================================================================================== 00:29:13.914 [2024-12-10T22:00:21.646Z] Total : 15145.00 59.16 0.00 0.00 0.00 0.00 0.00 00:29:13.914 00:29:14.172 true 00:29:14.172 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e 00:29:14.172 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:14.430 23:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:14.430 23:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:14.430 23:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 202957 00:29:14.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:14.996 Nvme0n1 : 3.00 15282.33 59.70 0.00 0.00 0.00 0.00 0.00 00:29:14.996 [2024-12-10T22:00:22.728Z] =================================================================================================================== 00:29:14.996 [2024-12-10T22:00:22.728Z] Total : 15282.33 59.70 0.00 0.00 0.00 0.00 0.00 00:29:14.996 00:29:15.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.930 Nvme0n1 : 4.00 15383.00 60.09 0.00 0.00 0.00 0.00 0.00 00:29:15.930 [2024-12-10T22:00:23.662Z] =================================================================================================================== 00:29:15.930 [2024-12-10T22:00:23.662Z] Total : 15383.00 60.09 0.00 0.00 0.00 0.00 0.00 00:29:15.930 00:29:16.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:29:16.864 Nvme0n1 : 5.00 15468.60 60.42 0.00 0.00 0.00 0.00 0.00 00:29:16.864 [2024-12-10T22:00:24.596Z] =================================================================================================================== 00:29:16.864 [2024-12-10T22:00:24.596Z] Total : 15468.60 60.42 0.00 0.00 0.00 0.00 0.00 00:29:16.864 00:29:18.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:18.237 Nvme0n1 : 6.00 15536.33 60.69 0.00 0.00 0.00 0.00 0.00 00:29:18.237 [2024-12-10T22:00:25.969Z] =================================================================================================================== 00:29:18.237 [2024-12-10T22:00:25.969Z] Total : 15536.33 60.69 0.00 0.00 0.00 0.00 0.00 00:29:18.237 00:29:19.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.172 Nvme0n1 : 7.00 15584.71 60.88 0.00 0.00 0.00 0.00 0.00 00:29:19.172 [2024-12-10T22:00:26.904Z] =================================================================================================================== 00:29:19.172 [2024-12-10T22:00:26.904Z] Total : 15584.71 60.88 0.00 0.00 0.00 0.00 0.00 00:29:19.172 00:29:20.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.106 Nvme0n1 : 8.00 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:29:20.106 [2024-12-10T22:00:27.838Z] =================================================================================================================== 00:29:20.106 [2024-12-10T22:00:27.838Z] Total : 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:29:20.106 00:29:21.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.040 Nvme0n1 : 9.00 15649.22 61.13 0.00 0.00 0.00 0.00 0.00 00:29:21.040 [2024-12-10T22:00:28.772Z] =================================================================================================================== 00:29:21.040 [2024-12-10T22:00:28.772Z] Total : 15649.22 61.13 0.00 0.00 0.00 0.00 0.00 00:29:21.040 
00:29:21.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.982 Nvme0n1 : 10.00 15671.80 61.22 0.00 0.00 0.00 0.00 0.00 00:29:21.982 [2024-12-10T22:00:29.714Z] =================================================================================================================== 00:29:21.982 [2024-12-10T22:00:29.714Z] Total : 15671.80 61.22 0.00 0.00 0.00 0.00 0.00 00:29:21.982 00:29:21.982 00:29:21.982 Latency(us) 00:29:21.982 [2024-12-10T22:00:29.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.982 Nvme0n1 : 10.01 15673.06 61.22 0.00 0.00 8162.16 6116.69 17961.72 00:29:21.982 [2024-12-10T22:00:29.714Z] =================================================================================================================== 00:29:21.982 [2024-12-10T22:00:29.714Z] Total : 15673.06 61.22 0.00 0.00 8162.16 6116.69 17961.72 00:29:21.982 { 00:29:21.982 "results": [ 00:29:21.982 { 00:29:21.982 "job": "Nvme0n1", 00:29:21.982 "core_mask": "0x2", 00:29:21.982 "workload": "randwrite", 00:29:21.982 "status": "finished", 00:29:21.982 "queue_depth": 128, 00:29:21.982 "io_size": 4096, 00:29:21.982 "runtime": 10.00736, 00:29:21.982 "iops": 15673.064624436414, 00:29:21.982 "mibps": 61.22290868920474, 00:29:21.982 "io_failed": 0, 00:29:21.982 "io_timeout": 0, 00:29:21.982 "avg_latency_us": 8162.15981203549, 00:29:21.982 "min_latency_us": 6116.693333333334, 00:29:21.982 "max_latency_us": 17961.71851851852 00:29:21.982 } 00:29:21.982 ], 00:29:21.982 "core_count": 1 00:29:21.982 } 00:29:21.982 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 202822 00:29:21.982 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 202822 ']' 00:29:21.982 23:00:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 202822 00:29:21.982 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:21.982 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.982 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 202822 00:29:21.982 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.982 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.982 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 202822' 00:29:21.982 killing process with pid 202822 00:29:21.982 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 202822 00:29:21.982 Received shutdown signal, test time was about 10.000000 seconds 00:29:21.982 00:29:21.982 Latency(us) 00:29:21.982 [2024-12-10T22:00:29.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.982 [2024-12-10T22:00:29.714Z] =================================================================================================================== 00:29:21.982 [2024-12-10T22:00:29.714Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.982 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 202822 00:29:22.241 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:22.499 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:23.065 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e 00:29:23.065 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:23.322 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:23.322 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:23.322 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:23.580 [2024-12-10 23:00:31.082523] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:23.580 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e 00:29:23.838 request: 00:29:23.838 { 00:29:23.838 "uuid": "572ca6b7-7be9-4270-8a48-d04f8d6fd81e", 00:29:23.838 "method": 
"bdev_lvol_get_lvstores", 00:29:23.838 "req_id": 1 00:29:23.838 } 00:29:23.838 Got JSON-RPC error response 00:29:23.838 response: 00:29:23.838 { 00:29:23.838 "code": -19, 00:29:23.838 "message": "No such device" 00:29:23.838 } 00:29:23.838 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:23.838 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:23.838 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:23.838 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:23.838 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:24.096 aio_bdev 00:29:24.096 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0e5a8891-06df-4f05-bfb1-9dfe8ee885dd 00:29:24.096 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=0e5a8891-06df-4f05-bfb1-9dfe8ee885dd 00:29:24.096 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:24.096 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:24.096 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:24.096 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:24.096 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:24.354 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0e5a8891-06df-4f05-bfb1-9dfe8ee885dd -t 2000 00:29:24.612 [ 00:29:24.612 { 00:29:24.612 "name": "0e5a8891-06df-4f05-bfb1-9dfe8ee885dd", 00:29:24.612 "aliases": [ 00:29:24.612 "lvs/lvol" 00:29:24.612 ], 00:29:24.612 "product_name": "Logical Volume", 00:29:24.612 "block_size": 4096, 00:29:24.612 "num_blocks": 38912, 00:29:24.612 "uuid": "0e5a8891-06df-4f05-bfb1-9dfe8ee885dd", 00:29:24.612 "assigned_rate_limits": { 00:29:24.612 "rw_ios_per_sec": 0, 00:29:24.612 "rw_mbytes_per_sec": 0, 00:29:24.612 "r_mbytes_per_sec": 0, 00:29:24.612 "w_mbytes_per_sec": 0 00:29:24.612 }, 00:29:24.612 "claimed": false, 00:29:24.612 "zoned": false, 00:29:24.612 "supported_io_types": { 00:29:24.612 "read": true, 00:29:24.612 "write": true, 00:29:24.612 "unmap": true, 00:29:24.612 "flush": false, 00:29:24.612 "reset": true, 00:29:24.612 "nvme_admin": false, 00:29:24.612 "nvme_io": false, 00:29:24.612 "nvme_io_md": false, 00:29:24.612 "write_zeroes": true, 00:29:24.612 "zcopy": false, 00:29:24.612 "get_zone_info": false, 00:29:24.612 "zone_management": false, 00:29:24.612 "zone_append": false, 00:29:24.612 "compare": false, 00:29:24.612 "compare_and_write": false, 00:29:24.612 "abort": false, 00:29:24.612 "seek_hole": true, 00:29:24.612 "seek_data": true, 00:29:24.612 "copy": false, 00:29:24.612 "nvme_iov_md": false 00:29:24.612 }, 00:29:24.612 "driver_specific": { 00:29:24.612 "lvol": { 00:29:24.612 "lvol_store_uuid": "572ca6b7-7be9-4270-8a48-d04f8d6fd81e", 00:29:24.612 "base_bdev": "aio_bdev", 00:29:24.612 
"thin_provision": false, 00:29:24.612 "num_allocated_clusters": 38, 00:29:24.612 "snapshot": false, 00:29:24.612 "clone": false, 00:29:24.612 "esnap_clone": false 00:29:24.612 } 00:29:24.612 } 00:29:24.612 } 00:29:24.612 ] 00:29:24.612 23:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:24.612 23:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e 00:29:24.612 23:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:24.870 23:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:24.870 23:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e 00:29:24.870 23:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:25.128 23:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:25.128 23:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0e5a8891-06df-4f05-bfb1-9dfe8ee885dd 00:29:25.386 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 572ca6b7-7be9-4270-8a48-d04f8d6fd81e 
00:29:25.644 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:25.902 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:25.902 00:29:25.902 real 0m17.907s 00:29:25.902 user 0m17.452s 00:29:25.902 sys 0m1.874s 00:29:25.902 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.902 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:25.902 ************************************ 00:29:25.902 END TEST lvs_grow_clean 00:29:25.902 ************************************ 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:26.160 ************************************ 00:29:26.160 START TEST lvs_grow_dirty 00:29:26.160 ************************************ 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:26.160 23:00:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:26.160 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:26.418 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:26.418 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:26.676 23:00:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:26.676 23:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:26.676 23:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:26.934 23:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:26.934 23:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:26.934 23:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a lvol 150 00:29:27.192 23:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f2a6b3eb-b555-41ad-9a9e-f5c58110661c 00:29:27.192 23:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:27.192 23:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:27.450 [2024-12-10 23:00:35.062475] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:27.450 [2024-12-10 
23:00:35.062618] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:27.450 true 00:29:27.450 23:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:27.450 23:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:27.708 23:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:27.708 23:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:27.966 23:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f2a6b3eb-b555-41ad-9a9e-f5c58110661c 00:29:28.224 23:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:28.482 [2024-12-10 23:00:36.174795] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.482 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:28.740 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=204975 00:29:28.740 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:28.740 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:28.740 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 204975 /var/tmp/bdevperf.sock 00:29:28.740 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 204975 ']' 00:29:28.740 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:28.740 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.740 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:28.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:28.740 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.740 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:28.998 [2024-12-10 23:00:36.508646] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:29:28.998 [2024-12-10 23:00:36.508743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204975 ] 00:29:28.998 [2024-12-10 23:00:36.579019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.998 [2024-12-10 23:00:36.641658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.256 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.256 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:29.256 23:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:29.514 Nvme0n1 00:29:29.514 23:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:29.772 [ 00:29:29.772 { 00:29:29.772 "name": "Nvme0n1", 00:29:29.772 "aliases": [ 00:29:29.772 "f2a6b3eb-b555-41ad-9a9e-f5c58110661c" 00:29:29.772 ], 00:29:29.772 "product_name": "NVMe disk", 00:29:29.772 "block_size": 4096, 00:29:29.772 "num_blocks": 38912, 00:29:29.772 "uuid": "f2a6b3eb-b555-41ad-9a9e-f5c58110661c", 00:29:29.772 "numa_id": 0, 00:29:29.772 "assigned_rate_limits": { 00:29:29.772 "rw_ios_per_sec": 0, 00:29:29.772 "rw_mbytes_per_sec": 0, 00:29:29.772 "r_mbytes_per_sec": 0, 00:29:29.772 "w_mbytes_per_sec": 0 00:29:29.772 }, 00:29:29.772 "claimed": false, 00:29:29.772 "zoned": false, 
00:29:29.772 "supported_io_types": { 00:29:29.772 "read": true, 00:29:29.772 "write": true, 00:29:29.772 "unmap": true, 00:29:29.772 "flush": true, 00:29:29.772 "reset": true, 00:29:29.772 "nvme_admin": true, 00:29:29.772 "nvme_io": true, 00:29:29.772 "nvme_io_md": false, 00:29:29.772 "write_zeroes": true, 00:29:29.772 "zcopy": false, 00:29:29.772 "get_zone_info": false, 00:29:29.772 "zone_management": false, 00:29:29.772 "zone_append": false, 00:29:29.772 "compare": true, 00:29:29.772 "compare_and_write": true, 00:29:29.772 "abort": true, 00:29:29.772 "seek_hole": false, 00:29:29.772 "seek_data": false, 00:29:29.772 "copy": true, 00:29:29.772 "nvme_iov_md": false 00:29:29.772 }, 00:29:29.772 "memory_domains": [ 00:29:29.772 { 00:29:29.772 "dma_device_id": "system", 00:29:29.772 "dma_device_type": 1 00:29:29.772 } 00:29:29.772 ], 00:29:29.772 "driver_specific": { 00:29:29.772 "nvme": [ 00:29:29.772 { 00:29:29.772 "trid": { 00:29:29.772 "trtype": "TCP", 00:29:29.772 "adrfam": "IPv4", 00:29:29.772 "traddr": "10.0.0.2", 00:29:29.772 "trsvcid": "4420", 00:29:29.772 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:29.772 }, 00:29:29.772 "ctrlr_data": { 00:29:29.772 "cntlid": 1, 00:29:29.772 "vendor_id": "0x8086", 00:29:29.772 "model_number": "SPDK bdev Controller", 00:29:29.772 "serial_number": "SPDK0", 00:29:29.772 "firmware_revision": "25.01", 00:29:29.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.772 "oacs": { 00:29:29.772 "security": 0, 00:29:29.772 "format": 0, 00:29:29.772 "firmware": 0, 00:29:29.772 "ns_manage": 0 00:29:29.772 }, 00:29:29.772 "multi_ctrlr": true, 00:29:29.772 "ana_reporting": false 00:29:29.772 }, 00:29:29.772 "vs": { 00:29:29.772 "nvme_version": "1.3" 00:29:29.772 }, 00:29:29.772 "ns_data": { 00:29:29.772 "id": 1, 00:29:29.772 "can_share": true 00:29:29.772 } 00:29:29.772 } 00:29:29.772 ], 00:29:29.772 "mp_policy": "active_passive" 00:29:29.772 } 00:29:29.772 } 00:29:29.772 ] 00:29:29.772 23:00:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=204993 00:29:29.772 23:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:29.772 23:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:30.030 Running I/O for 10 seconds... 00:29:30.964 Latency(us) 00:29:30.964 [2024-12-10T22:00:38.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.964 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:29:30.964 [2024-12-10T22:00:38.696Z] =================================================================================================================== 00:29:30.964 [2024-12-10T22:00:38.696Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:29:30.964 00:29:31.898 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:31.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.898 Nvme0n1 : 2.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:29:31.898 [2024-12-10T22:00:39.630Z] =================================================================================================================== 00:29:31.898 [2024-12-10T22:00:39.630Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:29:31.898 00:29:32.156 true 00:29:32.156 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:32.156 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:32.414 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:32.414 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:32.414 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 204993 00:29:32.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.979 Nvme0n1 : 3.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:29:32.979 [2024-12-10T22:00:40.711Z] =================================================================================================================== 00:29:32.979 [2024-12-10T22:00:40.711Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:29:32.979 00:29:33.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.912 Nvme0n1 : 4.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:29:33.912 [2024-12-10T22:00:41.644Z] =================================================================================================================== 00:29:33.912 [2024-12-10T22:00:41.644Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:29:33.912 00:29:34.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.892 Nvme0n1 : 5.00 15316.20 59.83 0.00 0.00 0.00 0.00 0.00 00:29:34.892 [2024-12-10T22:00:42.624Z] =================================================================================================================== 00:29:34.892 [2024-12-10T22:00:42.624Z] Total : 15316.20 59.83 0.00 0.00 0.00 0.00 0.00 00:29:34.892 00:29:35.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:35.833 Nvme0n1 : 6.00 15388.17 60.11 0.00 0.00 0.00 0.00 0.00 00:29:35.833 [2024-12-10T22:00:43.565Z] =================================================================================================================== 00:29:35.833 [2024-12-10T22:00:43.565Z] Total : 15388.17 60.11 0.00 0.00 0.00 0.00 0.00 00:29:35.833 00:29:37.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.216 Nvme0n1 : 7.00 15457.71 60.38 0.00 0.00 0.00 0.00 0.00 00:29:37.216 [2024-12-10T22:00:44.948Z] =================================================================================================================== 00:29:37.216 [2024-12-10T22:00:44.948Z] Total : 15457.71 60.38 0.00 0.00 0.00 0.00 0.00 00:29:37.216 00:29:38.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.154 Nvme0n1 : 8.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:29:38.154 [2024-12-10T22:00:45.886Z] =================================================================================================================== 00:29:38.154 [2024-12-10T22:00:45.887Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:29:38.155 00:29:39.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.095 Nvme0n1 : 9.00 15522.22 60.63 0.00 0.00 0.00 0.00 0.00 00:29:39.095 [2024-12-10T22:00:46.827Z] =================================================================================================================== 00:29:39.095 [2024-12-10T22:00:46.827Z] Total : 15522.22 60.63 0.00 0.00 0.00 0.00 0.00 00:29:39.095 00:29:40.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.029 Nvme0n1 : 10.00 15563.90 60.80 0.00 0.00 0.00 0.00 0.00 00:29:40.029 [2024-12-10T22:00:47.761Z] =================================================================================================================== 00:29:40.029 [2024-12-10T22:00:47.761Z] Total : 15563.90 60.80 0.00 0.00 0.00 0.00 0.00 00:29:40.029 00:29:40.029 
00:29:40.029 Latency(us) 00:29:40.029 [2024-12-10T22:00:47.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.030 Nvme0n1 : 10.01 15567.85 60.81 0.00 0.00 8216.88 4708.88 17961.72 00:29:40.030 [2024-12-10T22:00:47.762Z] =================================================================================================================== 00:29:40.030 [2024-12-10T22:00:47.762Z] Total : 15567.85 60.81 0.00 0.00 8216.88 4708.88 17961.72 00:29:40.030 { 00:29:40.030 "results": [ 00:29:40.030 { 00:29:40.030 "job": "Nvme0n1", 00:29:40.030 "core_mask": "0x2", 00:29:40.030 "workload": "randwrite", 00:29:40.030 "status": "finished", 00:29:40.030 "queue_depth": 128, 00:29:40.030 "io_size": 4096, 00:29:40.030 "runtime": 10.005685, 00:29:40.030 "iops": 15567.849677458365, 00:29:40.030 "mibps": 60.81191280257174, 00:29:40.030 "io_failed": 0, 00:29:40.030 "io_timeout": 0, 00:29:40.030 "avg_latency_us": 8216.880704661213, 00:29:40.030 "min_latency_us": 4708.882962962963, 00:29:40.030 "max_latency_us": 17961.71851851852 00:29:40.030 } 00:29:40.030 ], 00:29:40.030 "core_count": 1 00:29:40.030 } 00:29:40.030 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 204975 00:29:40.030 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 204975 ']' 00:29:40.030 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 204975 00:29:40.030 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:40.030 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.030 23:00:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 204975 00:29:40.030 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:40.030 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:40.030 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 204975' 00:29:40.030 killing process with pid 204975 00:29:40.030 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 204975 00:29:40.030 Received shutdown signal, test time was about 10.000000 seconds 00:29:40.030 00:29:40.030 Latency(us) 00:29:40.030 [2024-12-10T22:00:47.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.030 [2024-12-10T22:00:47.762Z] =================================================================================================================== 00:29:40.030 [2024-12-10T22:00:47.762Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:40.030 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 204975 00:29:40.288 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:40.548 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:40.808 23:00:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:40.808 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 202382 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 202382 00:29:41.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 202382 Killed "${NVMF_APP[@]}" "$@" 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=206314 00:29:41.069 23:00:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 206314 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 206314 ']' 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.069 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:41.069 [2024-12-10 23:00:48.738711] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:41.069 [2024-12-10 23:00:48.739803] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:29:41.069 [2024-12-10 23:00:48.739887] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.330 [2024-12-10 23:00:48.816339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.330 [2024-12-10 23:00:48.874256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.330 [2024-12-10 23:00:48.874311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.330 [2024-12-10 23:00:48.874340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.330 [2024-12-10 23:00:48.874351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.330 [2024-12-10 23:00:48.874360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.330 [2024-12-10 23:00:48.874954] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.330 [2024-12-10 23:00:48.967070] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:41.330 [2024-12-10 23:00:48.967354] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:41.330 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.330 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:41.330 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.330 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.330 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:41.330 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.330 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:41.590 [2024-12-10 23:00:49.277773] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:41.590 [2024-12-10 23:00:49.277929] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:41.590 [2024-12-10 23:00:49.277981] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:41.590 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:41.590 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f2a6b3eb-b555-41ad-9a9e-f5c58110661c 00:29:41.590 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=f2a6b3eb-b555-41ad-9a9e-f5c58110661c 00:29:41.590 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:41.590 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:41.590 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:41.590 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:41.590 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:41.850 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f2a6b3eb-b555-41ad-9a9e-f5c58110661c -t 2000 00:29:42.110 [ 00:29:42.110 { 00:29:42.110 "name": "f2a6b3eb-b555-41ad-9a9e-f5c58110661c", 00:29:42.110 "aliases": [ 00:29:42.110 "lvs/lvol" 00:29:42.110 ], 00:29:42.110 "product_name": "Logical Volume", 00:29:42.110 "block_size": 4096, 00:29:42.110 "num_blocks": 38912, 00:29:42.110 "uuid": "f2a6b3eb-b555-41ad-9a9e-f5c58110661c", 00:29:42.110 "assigned_rate_limits": { 00:29:42.110 "rw_ios_per_sec": 0, 00:29:42.110 "rw_mbytes_per_sec": 0, 00:29:42.110 "r_mbytes_per_sec": 0, 00:29:42.110 "w_mbytes_per_sec": 0 00:29:42.110 }, 00:29:42.110 "claimed": false, 00:29:42.110 "zoned": false, 00:29:42.110 "supported_io_types": { 00:29:42.110 "read": true, 00:29:42.110 "write": true, 00:29:42.110 "unmap": true, 00:29:42.110 "flush": false, 00:29:42.110 "reset": true, 00:29:42.110 "nvme_admin": false, 00:29:42.110 "nvme_io": false, 00:29:42.110 "nvme_io_md": false, 00:29:42.110 "write_zeroes": true, 
00:29:42.110 "zcopy": false, 00:29:42.110 "get_zone_info": false, 00:29:42.110 "zone_management": false, 00:29:42.110 "zone_append": false, 00:29:42.110 "compare": false, 00:29:42.110 "compare_and_write": false, 00:29:42.110 "abort": false, 00:29:42.110 "seek_hole": true, 00:29:42.110 "seek_data": true, 00:29:42.110 "copy": false, 00:29:42.110 "nvme_iov_md": false 00:29:42.110 }, 00:29:42.110 "driver_specific": { 00:29:42.110 "lvol": { 00:29:42.110 "lvol_store_uuid": "90c0b84a-aa1a-450b-8cb2-b81014b91d5a", 00:29:42.110 "base_bdev": "aio_bdev", 00:29:42.110 "thin_provision": false, 00:29:42.110 "num_allocated_clusters": 38, 00:29:42.110 "snapshot": false, 00:29:42.110 "clone": false, 00:29:42.110 "esnap_clone": false 00:29:42.110 } 00:29:42.110 } 00:29:42.110 } 00:29:42.110 ] 00:29:42.368 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:42.368 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:42.368 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:42.628 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:42.628 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:42.628 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:42.888 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:42.888 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:43.147 [2024-12-10 23:00:50.675462] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:43.147 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:43.406 request: 00:29:43.406 { 00:29:43.406 "uuid": "90c0b84a-aa1a-450b-8cb2-b81014b91d5a", 00:29:43.406 "method": "bdev_lvol_get_lvstores", 00:29:43.406 "req_id": 1 00:29:43.406 } 00:29:43.406 Got JSON-RPC error response 00:29:43.406 response: 00:29:43.406 { 00:29:43.406 "code": -19, 00:29:43.406 "message": "No such device" 00:29:43.406 } 00:29:43.406 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:43.406 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:43.406 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:43.406 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:43.406 23:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:43.664 aio_bdev 00:29:43.664 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f2a6b3eb-b555-41ad-9a9e-f5c58110661c 00:29:43.664 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f2a6b3eb-b555-41ad-9a9e-f5c58110661c 00:29:43.664 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:43.664 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:43.664 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:43.664 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:43.664 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:43.922 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f2a6b3eb-b555-41ad-9a9e-f5c58110661c -t 2000 00:29:44.179 [ 00:29:44.179 { 00:29:44.179 "name": "f2a6b3eb-b555-41ad-9a9e-f5c58110661c", 00:29:44.179 "aliases": [ 00:29:44.179 "lvs/lvol" 00:29:44.179 ], 00:29:44.179 "product_name": "Logical Volume", 00:29:44.179 "block_size": 4096, 00:29:44.179 "num_blocks": 38912, 00:29:44.179 "uuid": "f2a6b3eb-b555-41ad-9a9e-f5c58110661c", 00:29:44.179 "assigned_rate_limits": { 00:29:44.179 "rw_ios_per_sec": 0, 00:29:44.179 "rw_mbytes_per_sec": 0, 00:29:44.179 
"r_mbytes_per_sec": 0, 00:29:44.179 "w_mbytes_per_sec": 0 00:29:44.179 }, 00:29:44.179 "claimed": false, 00:29:44.179 "zoned": false, 00:29:44.179 "supported_io_types": { 00:29:44.179 "read": true, 00:29:44.179 "write": true, 00:29:44.179 "unmap": true, 00:29:44.179 "flush": false, 00:29:44.179 "reset": true, 00:29:44.179 "nvme_admin": false, 00:29:44.179 "nvme_io": false, 00:29:44.179 "nvme_io_md": false, 00:29:44.179 "write_zeroes": true, 00:29:44.179 "zcopy": false, 00:29:44.179 "get_zone_info": false, 00:29:44.179 "zone_management": false, 00:29:44.179 "zone_append": false, 00:29:44.179 "compare": false, 00:29:44.179 "compare_and_write": false, 00:29:44.179 "abort": false, 00:29:44.179 "seek_hole": true, 00:29:44.179 "seek_data": true, 00:29:44.179 "copy": false, 00:29:44.179 "nvme_iov_md": false 00:29:44.179 }, 00:29:44.179 "driver_specific": { 00:29:44.180 "lvol": { 00:29:44.180 "lvol_store_uuid": "90c0b84a-aa1a-450b-8cb2-b81014b91d5a", 00:29:44.180 "base_bdev": "aio_bdev", 00:29:44.180 "thin_provision": false, 00:29:44.180 "num_allocated_clusters": 38, 00:29:44.180 "snapshot": false, 00:29:44.180 "clone": false, 00:29:44.180 "esnap_clone": false 00:29:44.180 } 00:29:44.180 } 00:29:44.180 } 00:29:44.180 ] 00:29:44.180 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:44.180 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:44.180 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:44.439 23:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:44.439 23:00:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:44.439 23:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:44.697 23:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:44.697 23:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f2a6b3eb-b555-41ad-9a9e-f5c58110661c 00:29:44.958 23:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 90c0b84a-aa1a-450b-8cb2-b81014b91d5a 00:29:45.218 23:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:45.477 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:45.735 00:29:45.735 real 0m19.544s 00:29:45.735 user 0m36.597s 00:29:45.735 sys 0m4.701s 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:45.735 ************************************ 00:29:45.735 END TEST lvs_grow_dirty 00:29:45.735 ************************************ 
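The lvs_grow_dirty pass above ends with the cluster-accounting assertions from nvmf_lvs_grow.sh lines 88-89. A minimal standalone sketch of that check, with the lvstore JSON inlined instead of fetched from a live target via scripts/rpc.py (the UUID and cluster counts are copied from this run):

```shell
# Inlined stand-in for: scripts/rpc.py bdev_lvol_get_lvstores -u <uuid>
# (values copied from this run; the real script queries a live SPDK target)
lvstores='[{"uuid":"90c0b84a-aa1a-450b-8cb2-b81014b91d5a",
            "free_clusters":61,"total_data_clusters":99}]'

free_clusters=$(echo "$lvstores" | jq -r '.[0].free_clusters')
data_clusters=$(echo "$lvstores" | jq -r '.[0].total_data_clusters')

# The same arithmetic assertions the test script makes at @88 and @89
(( free_clusters == 61 ))
(( data_clusters == 99 ))
echo "lvstore cluster accounting OK: $free_clusters free / $data_clusters total"
```

The `jq -r '.[0].free_clusters'` / `(( ... ))` pair is the idiom the suite uses throughout: extract one scalar from the RPC JSON, then assert on it with bash arithmetic so a mismatch fails the test under `set -e`.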
00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:45.735 nvmf_trace.0 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:45.735 23:00:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:45.735 rmmod nvme_tcp 00:29:45.735 rmmod nvme_fabrics 00:29:45.735 rmmod nvme_keyring 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 206314 ']' 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 206314 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 206314 ']' 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 206314 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206314 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:45.735 23:00:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206314' 00:29:45.735 killing process with pid 206314 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 206314 00:29:45.735 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 206314 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.993 23:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.899 23:00:55 
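The shutdown above runs through killprocess() in autotest_common.sh, which re-checks that the pid still names the expected process (reactor_0 here, pid 206314) before signalling it. A simplified sketch of that guard — the sudo escalation path and the trailing `wait` semantics of the real helper are reduced here, and a `sleep` process stands in for the nvmf target:

```shell
# Simplified killprocess(): only signal the pid if it is still alive and its
# comm name is not "sudo" (the real helper in autotest_common.sh logs the
# kill and handles sudo-wrapped targets separately)
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")         # same probe as @960 above
    if [ "$name" != "sudo" ]; then
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap; ignore the TERM status
    fi
    return 0
}

sleep 60 &                                          # stand-in for the target process
target_pid=$!
killprocess "$target_pid"
```

Re-reading the comm name before killing protects against pid reuse between the test storing the pid and the teardown firing, which matters on long-running CI nodes like this one.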
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.899 00:29:47.899 real 0m42.805s 00:29:47.899 user 0m55.734s 00:29:47.899 sys 0m8.542s 00:29:47.899 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.899 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:47.899 ************************************ 00:29:47.899 END TEST nvmf_lvs_grow 00:29:47.899 ************************************ 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:48.159 ************************************ 00:29:48.159 START TEST nvmf_bdev_io_wait 00:29:48.159 ************************************ 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:48.159 * Looking for test storage... 
00:29:48.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:48.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.159 --rc genhtml_branch_coverage=1 00:29:48.159 --rc genhtml_function_coverage=1 00:29:48.159 --rc genhtml_legend=1 00:29:48.159 --rc geninfo_all_blocks=1 00:29:48.159 --rc geninfo_unexecuted_blocks=1 00:29:48.159 00:29:48.159 ' 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:48.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.159 --rc genhtml_branch_coverage=1 00:29:48.159 --rc genhtml_function_coverage=1 00:29:48.159 --rc genhtml_legend=1 00:29:48.159 --rc geninfo_all_blocks=1 00:29:48.159 --rc geninfo_unexecuted_blocks=1 00:29:48.159 00:29:48.159 ' 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:48.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.159 --rc genhtml_branch_coverage=1 00:29:48.159 --rc genhtml_function_coverage=1 00:29:48.159 --rc genhtml_legend=1 00:29:48.159 --rc geninfo_all_blocks=1 00:29:48.159 --rc geninfo_unexecuted_blocks=1 00:29:48.159 00:29:48.159 ' 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:48.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.159 --rc genhtml_branch_coverage=1 00:29:48.159 --rc genhtml_function_coverage=1 
00:29:48.159 --rc genhtml_legend=1 00:29:48.159 --rc geninfo_all_blocks=1 00:29:48.159 --rc geninfo_unexecuted_blocks=1 00:29:48.159 00:29:48.159 ' 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:48.159 23:00:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.159 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.159 23:00:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:48.160 23:00:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:48.160 23:00:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.160 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:50.692 23:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.692 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:50.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:50.693 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:50.693 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:50.693 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:50.693 23:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:50.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:50.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:29:50.693 00:29:50.693 --- 10.0.0.2 ping statistics --- 00:29:50.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.693 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:50.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:29:50.693 00:29:50.693 --- 10.0.0.1 ping statistics --- 00:29:50.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.693 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:50.693 23:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:50.693 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:50.694 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:50.694 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.694 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=208960 00:29:50.694 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:50.694 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 208960 00:29:50.694 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 208960 ']' 00:29:50.694 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.694 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.694 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:50.694 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.694 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.694 [2024-12-10 23:00:58.221033] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:50.694 [2024-12-10 23:00:58.222107] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:29:50.694 [2024-12-10 23:00:58.222162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.694 [2024-12-10 23:00:58.295333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.694 [2024-12-10 23:00:58.354078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.694 [2024-12-10 23:00:58.354126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.694 [2024-12-10 23:00:58.354153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.694 [2024-12-10 23:00:58.354164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.694 [2024-12-10 23:00:58.354173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:50.694 [2024-12-10 23:00:58.355798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.694 [2024-12-10 23:00:58.355844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.694 [2024-12-10 23:00:58.355901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.694 [2024-12-10 23:00:58.355904] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.694 [2024-12-10 23:00:58.356368] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.954 23:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.954 [2024-12-10 23:00:58.541991] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:50.954 [2024-12-10 23:00:58.542195] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:50.954 [2024-12-10 23:00:58.543107] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:50.954 [2024-12-10 23:00:58.543976] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.954 [2024-12-10 23:00:58.552643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.954 Malloc0 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.954 23:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.954 [2024-12-10 23:00:58.608778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=208989 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=208991 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:50.954 23:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.954 { 00:29:50.954 "params": { 00:29:50.954 "name": "Nvme$subsystem", 00:29:50.954 "trtype": "$TEST_TRANSPORT", 00:29:50.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.954 "adrfam": "ipv4", 00:29:50.954 "trsvcid": "$NVMF_PORT", 00:29:50.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.954 "hdgst": ${hdgst:-false}, 00:29:50.954 "ddgst": ${ddgst:-false} 00:29:50.954 }, 00:29:50.954 "method": "bdev_nvme_attach_controller" 00:29:50.954 } 00:29:50.954 EOF 00:29:50.954 )") 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=208993 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.954 23:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.954 { 00:29:50.954 "params": { 00:29:50.954 "name": "Nvme$subsystem", 00:29:50.954 "trtype": "$TEST_TRANSPORT", 00:29:50.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.954 "adrfam": "ipv4", 00:29:50.954 "trsvcid": "$NVMF_PORT", 00:29:50.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.954 "hdgst": ${hdgst:-false}, 00:29:50.954 "ddgst": ${ddgst:-false} 00:29:50.954 }, 00:29:50.954 "method": "bdev_nvme_attach_controller" 00:29:50.954 } 00:29:50.954 EOF 00:29:50.954 )") 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=208996 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.954 { 00:29:50.954 "params": { 00:29:50.954 "name": "Nvme$subsystem", 00:29:50.954 "trtype": "$TEST_TRANSPORT", 00:29:50.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.954 
"adrfam": "ipv4", 00:29:50.954 "trsvcid": "$NVMF_PORT", 00:29:50.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.954 "hdgst": ${hdgst:-false}, 00:29:50.954 "ddgst": ${ddgst:-false} 00:29:50.954 }, 00:29:50.954 "method": "bdev_nvme_attach_controller" 00:29:50.954 } 00:29:50.954 EOF 00:29:50.954 )") 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:50.954 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.955 { 00:29:50.955 "params": { 00:29:50.955 "name": "Nvme$subsystem", 00:29:50.955 "trtype": "$TEST_TRANSPORT", 00:29:50.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.955 "adrfam": "ipv4", 00:29:50.955 "trsvcid": "$NVMF_PORT", 00:29:50.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.955 "hdgst": ${hdgst:-false}, 00:29:50.955 "ddgst": ${ddgst:-false} 00:29:50.955 }, 00:29:50.955 "method": 
"bdev_nvme_attach_controller" 00:29:50.955 } 00:29:50.955 EOF 00:29:50.955 )") 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 208989 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:50.955 "params": { 00:29:50.955 "name": "Nvme1", 00:29:50.955 "trtype": "tcp", 00:29:50.955 "traddr": "10.0.0.2", 00:29:50.955 "adrfam": "ipv4", 00:29:50.955 "trsvcid": "4420", 00:29:50.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.955 "hdgst": false, 00:29:50.955 "ddgst": false 00:29:50.955 }, 00:29:50.955 "method": "bdev_nvme_attach_controller" 00:29:50.955 }' 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:50.955 "params": { 00:29:50.955 "name": "Nvme1", 00:29:50.955 "trtype": "tcp", 00:29:50.955 "traddr": "10.0.0.2", 00:29:50.955 "adrfam": "ipv4", 00:29:50.955 "trsvcid": "4420", 00:29:50.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.955 "hdgst": false, 
00:29:50.955 "ddgst": false 00:29:50.955 }, 00:29:50.955 "method": "bdev_nvme_attach_controller" 00:29:50.955 }' 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:50.955 "params": { 00:29:50.955 "name": "Nvme1", 00:29:50.955 "trtype": "tcp", 00:29:50.955 "traddr": "10.0.0.2", 00:29:50.955 "adrfam": "ipv4", 00:29:50.955 "trsvcid": "4420", 00:29:50.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.955 "hdgst": false, 00:29:50.955 "ddgst": false 00:29:50.955 }, 00:29:50.955 "method": "bdev_nvme_attach_controller" 00:29:50.955 }' 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:50.955 23:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:50.955 "params": { 00:29:50.955 "name": "Nvme1", 00:29:50.955 "trtype": "tcp", 00:29:50.955 "traddr": "10.0.0.2", 00:29:50.955 "adrfam": "ipv4", 00:29:50.955 "trsvcid": "4420", 00:29:50.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.955 "hdgst": false, 00:29:50.955 "ddgst": false 00:29:50.955 }, 00:29:50.955 "method": "bdev_nvme_attach_controller" 00:29:50.955 }' 00:29:50.955 [2024-12-10 23:00:58.661250] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:29:50.955 [2024-12-10 23:00:58.661252] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:29:50.955 [2024-12-10 23:00:58.661251] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:29:50.955 [2024-12-10 23:00:58.661247] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:29:50.955 [2024-12-10 23:00:58.661333] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:50.955 [2024-12-10 23:00:58.661334] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:50.955 [2024-12-10 23:00:58.661334] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:50.955 [2024-12-10 23:00:58.661335] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:51.214 [2024-12-10 23:00:58.835045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.214 [2024-12-10 23:00:58.888664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:29:51.214 [2024-12-10 23:00:58.931890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.475 [2024-12-10 23:00:58.986400] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:51.475 [2024-12-10 23:00:59.031464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.475 [2024-12-10 23:00:59.084832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:29:51.475 [2024-12-10 23:00:59.131759] app.c: 919:spdk_app_start: *NOTICE*: Total cores
available: 1 00:29:51.475 [2024-12-10 23:00:59.185120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:29:51.735 Running I/O for 1 seconds... 00:29:51.735 Running I/O for 1 seconds... 00:29:51.735 Running I/O for 1 seconds... 00:29:51.993 Running I/O for 1 seconds... 00:29:52.928 10410.00 IOPS, 40.66 MiB/s 00:29:52.928 Latency(us) 00:29:52.928 [2024-12-10T22:01:00.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.928 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:52.928 Nvme1n1 : 1.01 10454.05 40.84 0.00 0.00 12193.70 3956.43 17573.36 00:29:52.928 [2024-12-10T22:01:00.660Z] =================================================================================================================== 00:29:52.928 [2024-12-10T22:01:00.660Z] Total : 10454.05 40.84 0.00 0.00 12193.70 3956.43 17573.36 00:29:52.928 7039.00 IOPS, 27.50 MiB/s 00:29:52.928 Latency(us) 00:29:52.928 [2024-12-10T22:01:00.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.928 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:52.928 Nvme1n1 : 1.01 7115.02 27.79 0.00 0.00 17908.17 2087.44 21456.97 00:29:52.928 [2024-12-10T22:01:00.660Z] =================================================================================================================== 00:29:52.928 [2024-12-10T22:01:00.660Z] Total : 7115.02 27.79 0.00 0.00 17908.17 2087.44 21456.97 00:29:52.928 185968.00 IOPS, 726.44 MiB/s 00:29:52.928 Latency(us) 00:29:52.928 [2024-12-10T22:01:00.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.928 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:52.928 Nvme1n1 : 1.00 185619.05 725.07 0.00 0.00 685.81 288.24 1844.72 00:29:52.928 [2024-12-10T22:01:00.660Z] =================================================================================================================== 00:29:52.928 
[2024-12-10T22:01:00.660Z] Total : 185619.05 725.07 0.00 0.00 685.81 288.24 1844.72 00:29:52.928 9773.00 IOPS, 38.18 MiB/s 00:29:52.928 Latency(us) 00:29:52.928 [2024-12-10T22:01:00.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.928 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:52.928 Nvme1n1 : 1.01 9853.47 38.49 0.00 0.00 12947.52 2439.40 19029.71 00:29:52.928 [2024-12-10T22:01:00.660Z] =================================================================================================================== 00:29:52.928 [2024-12-10T22:01:00.660Z] Total : 9853.47 38.49 0.00 0.00 12947.52 2439.40 19029.71 00:29:52.928 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 208991 00:29:52.928 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 208993 00:29:52.928 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 208996 00:29:52.928 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:53.189 rmmod nvme_tcp 00:29:53.189 rmmod nvme_fabrics 00:29:53.189 rmmod nvme_keyring 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 208960 ']' 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 208960 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 208960 ']' 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 208960 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 208960 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 208960' 00:29:53.189 killing process with pid 208960 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 208960 00:29:53.189 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 208960 00:29:53.448 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:53.448 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:53.448 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:53.448 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:53.448 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:53.448 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:53.448 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:53.448 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:53.448 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:53.448 23:01:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.448 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.448 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.353 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:55.353 00:29:55.353 real 0m7.332s 00:29:55.353 user 0m14.526s 00:29:55.353 sys 0m4.219s 00:29:55.353 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:55.353 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:55.353 ************************************ 00:29:55.353 END TEST nvmf_bdev_io_wait 00:29:55.353 ************************************ 00:29:55.353 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:55.353 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:55.353 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:55.353 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:55.353 ************************************ 00:29:55.353 START TEST nvmf_queue_depth 00:29:55.353 ************************************ 00:29:55.353 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 
00:29:55.612 * Looking for test storage... 00:29:55.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:55.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.612 --rc genhtml_branch_coverage=1 00:29:55.612 --rc genhtml_function_coverage=1 00:29:55.612 --rc genhtml_legend=1 00:29:55.612 --rc geninfo_all_blocks=1 00:29:55.612 --rc geninfo_unexecuted_blocks=1 00:29:55.612 00:29:55.612 ' 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:55.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.612 --rc genhtml_branch_coverage=1 00:29:55.612 --rc genhtml_function_coverage=1 00:29:55.612 --rc genhtml_legend=1 00:29:55.612 --rc geninfo_all_blocks=1 00:29:55.612 --rc geninfo_unexecuted_blocks=1 00:29:55.612 00:29:55.612 ' 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:55.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.612 --rc genhtml_branch_coverage=1 00:29:55.612 --rc genhtml_function_coverage=1 00:29:55.612 --rc genhtml_legend=1 00:29:55.612 --rc geninfo_all_blocks=1 00:29:55.612 --rc geninfo_unexecuted_blocks=1 00:29:55.612 00:29:55.612 ' 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:55.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.612 --rc genhtml_branch_coverage=1 00:29:55.612 --rc genhtml_function_coverage=1 00:29:55.612 
--rc genhtml_legend=1 00:29:55.612 --rc geninfo_all_blocks=1 00:29:55.612 --rc geninfo_unexecuted_blocks=1 00:29:55.612 00:29:55.612 ' 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.612 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.613 23:01:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:55.613 23:01:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:55.613 23:01:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:55.613 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.155 
23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.155 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:58.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.156 23:01:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:58.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:58.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:58.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.156 23:01:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:58.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:58.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:29:58.156 00:29:58.156 --- 10.0.0.2 ping statistics --- 00:29:58.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.156 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:58.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:29:58.156 00:29:58.156 --- 10.0.0.1 ping statistics --- 00:29:58.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.156 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:58.156 23:01:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=211219 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 211219 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 211219 ']' 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:58.156 [2024-12-10 23:01:05.490729] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:58.156 [2024-12-10 23:01:05.491759] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:29:58.156 [2024-12-10 23:01:05.491828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.156 [2024-12-10 23:01:05.567390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.156 [2024-12-10 23:01:05.624736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.156 [2024-12-10 23:01:05.624792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.156 [2024-12-10 23:01:05.624821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.156 [2024-12-10 23:01:05.624833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.156 [2024-12-10 23:01:05.624843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.156 [2024-12-10 23:01:05.625441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.156 [2024-12-10 23:01:05.714722] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:58.156 [2024-12-10 23:01:05.715061] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:58.156 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:58.157 [2024-12-10 23:01:05.770032] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:58.157 Malloc0 00:29:58.157 23:01:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:58.157 [2024-12-10 23:01:05.834188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.157 
23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=211238 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 211238 /var/tmp/bdevperf.sock 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 211238 ']' 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:58.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.157 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:58.415 [2024-12-10 23:01:05.886744] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:29:58.415 [2024-12-10 23:01:05.886818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211238 ] 00:29:58.415 [2024-12-10 23:01:05.953458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.415 [2024-12-10 23:01:06.012696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.415 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.415 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:58.415 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:58.415 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.415 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:58.672 NVMe0n1 00:29:58.672 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.672 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:58.932 Running I/O for 10 seconds... 
00:30:00.803 8192.00 IOPS, 32.00 MiB/s [2024-12-10T22:01:09.911Z] 8565.50 IOPS, 33.46 MiB/s [2024-12-10T22:01:10.509Z] 8536.33 IOPS, 33.35 MiB/s [2024-12-10T22:01:11.883Z] 8599.00 IOPS, 33.59 MiB/s [2024-12-10T22:01:12.819Z] 8602.20 IOPS, 33.60 MiB/s [2024-12-10T22:01:13.756Z] 8690.00 IOPS, 33.95 MiB/s [2024-12-10T22:01:14.691Z] 8664.14 IOPS, 33.84 MiB/s [2024-12-10T22:01:15.623Z] 8699.75 IOPS, 33.98 MiB/s [2024-12-10T22:01:16.557Z] 8709.56 IOPS, 34.02 MiB/s [2024-12-10T22:01:16.817Z] 8712.20 IOPS, 34.03 MiB/s 00:30:09.085 Latency(us) 00:30:09.085 [2024-12-10T22:01:16.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.085 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:09.085 Verification LBA range: start 0x0 length 0x4000 00:30:09.085 NVMe0n1 : 10.07 8756.44 34.20 0.00 0.00 116471.10 11553.75 70293.43 00:30:09.085 [2024-12-10T22:01:16.817Z] =================================================================================================================== 00:30:09.085 [2024-12-10T22:01:16.817Z] Total : 8756.44 34.20 0.00 0.00 116471.10 11553.75 70293.43 00:30:09.085 { 00:30:09.085 "results": [ 00:30:09.085 { 00:30:09.085 "job": "NVMe0n1", 00:30:09.085 "core_mask": "0x1", 00:30:09.085 "workload": "verify", 00:30:09.085 "status": "finished", 00:30:09.085 "verify_range": { 00:30:09.085 "start": 0, 00:30:09.085 "length": 16384 00:30:09.085 }, 00:30:09.085 "queue_depth": 1024, 00:30:09.085 "io_size": 4096, 00:30:09.085 "runtime": 10.066422, 00:30:09.085 "iops": 8756.43798759877, 00:30:09.085 "mibps": 34.2048358890577, 00:30:09.085 "io_failed": 0, 00:30:09.085 "io_timeout": 0, 00:30:09.085 "avg_latency_us": 116471.09902879986, 00:30:09.085 "min_latency_us": 11553.754074074073, 00:30:09.085 "max_latency_us": 70293.42814814814 00:30:09.085 } 00:30:09.085 ], 00:30:09.085 "core_count": 1 00:30:09.085 } 00:30:09.085 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 211238 00:30:09.085 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 211238 ']' 00:30:09.085 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 211238 00:30:09.085 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:09.085 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:09.085 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211238 00:30:09.085 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:09.085 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:09.085 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211238' 00:30:09.085 killing process with pid 211238 00:30:09.085 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 211238 00:30:09.085 Received shutdown signal, test time was about 10.000000 seconds 00:30:09.085 00:30:09.085 Latency(us) 00:30:09.085 [2024-12-10T22:01:16.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.085 [2024-12-10T22:01:16.817Z] =================================================================================================================== 00:30:09.085 [2024-12-10T22:01:16.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:09.085 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 211238 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:09.345 rmmod nvme_tcp 00:30:09.345 rmmod nvme_fabrics 00:30:09.345 rmmod nvme_keyring 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 211219 ']' 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 211219 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 211219 ']' 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 211219 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@959 -- # uname 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211219 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211219' 00:30:09.345 killing process with pid 211219 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 211219 00:30:09.345 23:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 211219 00:30:09.603 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:09.603 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:09.603 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:09.603 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:09.603 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:09.603 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:09.603 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:09.603 23:01:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:09.603 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:09.603 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.603 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.603 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:12.137 00:30:12.137 real 0m16.195s 00:30:12.137 user 0m22.431s 00:30:12.137 sys 0m3.360s 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.137 ************************************ 00:30:12.137 END TEST nvmf_queue_depth 00:30:12.137 ************************************ 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:12.137 ************************************ 00:30:12.137 START TEST 
nvmf_target_multipath 00:30:12.137 ************************************ 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:12.137 * Looking for test storage... 00:30:12.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:12.137 23:01:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:12.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.137 --rc genhtml_branch_coverage=1 00:30:12.137 --rc genhtml_function_coverage=1 00:30:12.137 --rc genhtml_legend=1 00:30:12.137 --rc geninfo_all_blocks=1 00:30:12.137 --rc geninfo_unexecuted_blocks=1 00:30:12.137 00:30:12.137 ' 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:12.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.137 --rc genhtml_branch_coverage=1 00:30:12.137 --rc genhtml_function_coverage=1 00:30:12.137 --rc genhtml_legend=1 00:30:12.137 --rc geninfo_all_blocks=1 00:30:12.137 --rc geninfo_unexecuted_blocks=1 00:30:12.137 00:30:12.137 ' 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:12.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.137 --rc genhtml_branch_coverage=1 00:30:12.137 --rc genhtml_function_coverage=1 00:30:12.137 --rc genhtml_legend=1 00:30:12.137 --rc geninfo_all_blocks=1 00:30:12.137 --rc geninfo_unexecuted_blocks=1 00:30:12.137 00:30:12.137 ' 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:12.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.137 --rc genhtml_branch_coverage=1 00:30:12.137 --rc genhtml_function_coverage=1 00:30:12.137 --rc genhtml_legend=1 00:30:12.137 --rc geninfo_all_blocks=1 00:30:12.137 --rc geninfo_unexecuted_blocks=1 00:30:12.137 00:30:12.137 ' 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.137 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:12.138 23:01:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.138 23:01:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:12.138 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.041 23:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:14.041 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:14.041 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:14.041 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.041 23:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:14.041 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.041 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.042 23:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.042 23:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:30:14.042 00:30:14.042 --- 10.0.0.2 ping statistics --- 00:30:14.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.042 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:14.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:30:14.042 00:30:14.042 --- 10.0.0.1 ping statistics --- 00:30:14.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.042 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:14.042 only one NIC for nvmf test 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:14.042 23:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.042 rmmod nvme_tcp 00:30:14.042 rmmod nvme_fabrics 00:30:14.042 rmmod nvme_keyring 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:14.042 23:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.042 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.579 
23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:16.579 00:30:16.579 real 0m4.456s 00:30:16.579 user 0m0.867s 00:30:16.579 sys 0m1.609s 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:16.579 ************************************ 00:30:16.579 END TEST nvmf_target_multipath 00:30:16.579 ************************************ 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:16.579 ************************************ 00:30:16.579 START TEST nvmf_zcopy 00:30:16.579 ************************************ 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:16.579 * Looking for test storage... 
00:30:16.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:16.579 23:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:16.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.579 --rc genhtml_branch_coverage=1 00:30:16.579 --rc genhtml_function_coverage=1 00:30:16.579 --rc genhtml_legend=1 00:30:16.579 --rc geninfo_all_blocks=1 00:30:16.579 --rc geninfo_unexecuted_blocks=1 00:30:16.579 00:30:16.579 ' 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:16.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.579 --rc genhtml_branch_coverage=1 00:30:16.579 --rc genhtml_function_coverage=1 00:30:16.579 --rc genhtml_legend=1 00:30:16.579 --rc geninfo_all_blocks=1 00:30:16.579 --rc geninfo_unexecuted_blocks=1 00:30:16.579 00:30:16.579 ' 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:16.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.579 --rc genhtml_branch_coverage=1 00:30:16.579 --rc genhtml_function_coverage=1 00:30:16.579 --rc genhtml_legend=1 00:30:16.579 --rc geninfo_all_blocks=1 00:30:16.579 --rc geninfo_unexecuted_blocks=1 00:30:16.579 00:30:16.579 ' 00:30:16.579 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:16.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.579 --rc genhtml_branch_coverage=1 00:30:16.579 --rc genhtml_function_coverage=1 00:30:16.580 --rc genhtml_legend=1 00:30:16.580 --rc geninfo_all_blocks=1 00:30:16.580 --rc geninfo_unexecuted_blocks=1 00:30:16.580 00:30:16.580 ' 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.580 23:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:16.580 23:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:16.580 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.484 
23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.484 23:01:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:18.484 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:18.484 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:18.484 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.484 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:18.485 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.485 23:01:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:30:18.485 00:30:18.485 --- 10.0.0.2 ping statistics --- 00:30:18.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.485 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:30:18.485 00:30:18.485 --- 10.0.0.1 ping statistics --- 00:30:18.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.485 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=216417 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 216417 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 216417 ']' 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.485 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:18.744 [2024-12-10 23:01:26.249217] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:18.744 [2024-12-10 23:01:26.250281] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:30:18.744 [2024-12-10 23:01:26.250335] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.744 [2024-12-10 23:01:26.325829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.744 [2024-12-10 23:01:26.383941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.744 [2024-12-10 23:01:26.384001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.744 [2024-12-10 23:01:26.384032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.744 [2024-12-10 23:01:26.384044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.744 [2024-12-10 23:01:26.384054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.744 [2024-12-10 23:01:26.384724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.003 [2024-12-10 23:01:26.481725] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:19.003 [2024-12-10 23:01:26.482031] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:19.003 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.003 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:19.003 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:19.003 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:19.003 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:19.003 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.003 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:19.003 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:19.003 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.003 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:19.003 [2024-12-10 23:01:26.533383] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.003 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:19.004 
23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:19.004 [2024-12-10 23:01:26.549622] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:19.004 malloc0 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:19.004 { 00:30:19.004 "params": { 00:30:19.004 "name": "Nvme$subsystem", 00:30:19.004 "trtype": "$TEST_TRANSPORT", 00:30:19.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.004 "adrfam": "ipv4", 00:30:19.004 "trsvcid": "$NVMF_PORT", 00:30:19.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.004 "hdgst": ${hdgst:-false}, 00:30:19.004 "ddgst": ${ddgst:-false} 00:30:19.004 }, 00:30:19.004 "method": "bdev_nvme_attach_controller" 00:30:19.004 } 00:30:19.004 EOF 00:30:19.004 )") 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:19.004 23:01:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:19.004 23:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:19.004 "params": { 00:30:19.004 "name": "Nvme1", 00:30:19.004 "trtype": "tcp", 00:30:19.004 "traddr": "10.0.0.2", 00:30:19.004 "adrfam": "ipv4", 00:30:19.004 "trsvcid": "4420", 00:30:19.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:19.004 "hdgst": false, 00:30:19.004 "ddgst": false 00:30:19.004 }, 00:30:19.004 "method": "bdev_nvme_attach_controller" 00:30:19.004 }' 00:30:19.004 [2024-12-10 23:01:26.638521] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:30:19.004 [2024-12-10 23:01:26.638640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216444 ] 00:30:19.004 [2024-12-10 23:01:26.712845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.263 [2024-12-10 23:01:26.770959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.522 Running I/O for 10 seconds... 
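For readers following the bdevperf invocation above: the JSON fed through `/dev/fd/62` is the `bdev_nvme_attach_controller` config that `gen_nvmf_target_json` expands per subsystem, exactly as shown by the `printf '%s\n'` output in this log. A minimal Python sketch that rebuilds the same structure (field values copied from the log; the helper name `build_target_json` is hypothetical, not part of SPDK):

```python
import json

def build_target_json(subsystem: int, traddr: str, trsvcid: str) -> dict:
    # Mirrors the per-subsystem config block that gen_nvmf_target_json
    # emits for bdevperf's --json input (see the printf output above).
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,
            "ddgst": False,
        },
        "method": "bdev_nvme_attach_controller",
    }

config = build_target_json(1, "10.0.0.2", "4420")
print(json.dumps(config, indent=2))
```

The shell version in the log builds this with a heredoc template plus `jq`; the structure is the same either way.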
00:30:21.412 5633.00 IOPS, 44.01 MiB/s [2024-12-10T22:01:30.524Z] 5693.50 IOPS, 44.48 MiB/s [2024-12-10T22:01:31.093Z] 5699.00 IOPS, 44.52 MiB/s [2024-12-10T22:01:32.471Z] 5707.75 IOPS, 44.59 MiB/s [2024-12-10T22:01:33.410Z] 5717.80 IOPS, 44.67 MiB/s [2024-12-10T22:01:34.344Z] 5717.50 IOPS, 44.67 MiB/s [2024-12-10T22:01:35.282Z] 5720.14 IOPS, 44.69 MiB/s [2024-12-10T22:01:36.218Z] 5722.38 IOPS, 44.71 MiB/s [2024-12-10T22:01:37.157Z] 5724.33 IOPS, 44.72 MiB/s [2024-12-10T22:01:37.157Z] 5721.50 IOPS, 44.70 MiB/s 00:30:29.425 Latency(us) 00:30:29.425 [2024-12-10T22:01:37.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.425 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:29.425 Verification LBA range: start 0x0 length 0x1000 00:30:29.425 Nvme1n1 : 10.01 5726.05 44.73 0.00 0.00 22292.19 649.29 29127.11 00:30:29.425 [2024-12-10T22:01:37.157Z] =================================================================================================================== 00:30:29.425 [2024-12-10T22:01:37.157Z] Total : 5726.05 44.73 0.00 0.00 22292.19 649.29 29127.11 00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=217743 00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:29.685 23:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:29.685 { 00:30:29.685 "params": { 00:30:29.685 "name": "Nvme$subsystem", 00:30:29.685 "trtype": "$TEST_TRANSPORT", 00:30:29.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.685 "adrfam": "ipv4", 00:30:29.685 "trsvcid": "$NVMF_PORT", 00:30:29.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.685 "hdgst": ${hdgst:-false}, 00:30:29.685 "ddgst": ${ddgst:-false} 00:30:29.685 }, 00:30:29.685 "method": "bdev_nvme_attach_controller" 00:30:29.685 } 00:30:29.685 EOF 00:30:29.685 )") 00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:29.685 [2024-12-10 23:01:37.337321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.685 [2024-12-10 23:01:37.337368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:29.685 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:29.685 "params": { 00:30:29.685 "name": "Nvme1", 00:30:29.685 "trtype": "tcp", 00:30:29.685 "traddr": "10.0.0.2", 00:30:29.685 "adrfam": "ipv4", 00:30:29.685 "trsvcid": "4420", 00:30:29.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:29.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:29.685 "hdgst": false, 00:30:29.685 "ddgst": false 00:30:29.685 }, 00:30:29.685 "method": "bdev_nvme_attach_controller" 00:30:29.685 }' 00:30:29.685 [2024-12-10 23:01:37.345244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.685 [2024-12-10 23:01:37.345265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.685 [2024-12-10 23:01:37.353242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.685 [2024-12-10 23:01:37.353262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.685 [2024-12-10 23:01:37.361241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.685 [2024-12-10 23:01:37.361260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.685 [2024-12-10 23:01:37.369244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.685 [2024-12-10 23:01:37.369265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.685 [2024-12-10 23:01:37.376354] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:30:29.685 [2024-12-10 23:01:37.376426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid217743 ] 00:30:29.685 [2024-12-10 23:01:37.377242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.685 [2024-12-10 23:01:37.377261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.685 [2024-12-10 23:01:37.385246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.685 [2024-12-10 23:01:37.385266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.685 [2024-12-10 23:01:37.393241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.685 [2024-12-10 23:01:37.393260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.685 [2024-12-10 23:01:37.401241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.685 [2024-12-10 23:01:37.401260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.685 [2024-12-10 23:01:37.409242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.685 [2024-12-10 23:01:37.409262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.417249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.417270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.425242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.425261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:30:29.946 [2024-12-10 23:01:37.433241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.433259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.441242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.441261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.445869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.946 [2024-12-10 23:01:37.449241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.449266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.457282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.457317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.465262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.465289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.473243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.473263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.481241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.481260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.489241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.489260] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.497240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.497259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.505242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.505261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.506384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.946 [2024-12-10 23:01:37.513242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.513261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.521256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.521281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.529283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.529319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.537279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.537316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.545283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.545321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.553289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:30:29.946 [2024-12-10 23:01:37.553327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.561288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.561327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.569289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.569332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.577248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.577271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.585288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.585322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.593283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.593330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.601287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.601325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.609249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.609270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.617242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 
23:01:37.617261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.625452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.625492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.633247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.633269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.641333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.641357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.649249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.649272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.657246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.657266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.665245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.665265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.946 [2024-12-10 23:01:37.673262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.946 [2024-12-10 23:01:37.673283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.681260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.681280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.689261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.689281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.697245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.697266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.705247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.705283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.713242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.713261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.721242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.721262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.729242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.729261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.737241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.737260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.745248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.745277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 
[2024-12-10 23:01:37.753243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.753263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.761242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.761260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.769242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.769261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.777242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.777261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.785244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.785264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.793248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.793270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.801964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.801991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.809251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.809273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 Running I/O for 5 seconds... 
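The MiB/s column in the 10-second verify run's result table above is just IOPS scaled by the fixed 8 KiB I/O size (`bdevperf -o 8192`). A quick sanity check against the Total row, with the figures copied from the table in this log:

```python
# Total row of the 10 s verify run: 5726.05 IOPS at 8192-byte I/Os.
iops = 5726.05
io_size_bytes = 8192  # from bdevperf's -o 8192 argument

mib_per_s = iops * io_size_bytes / (1024 * 1024)
print(round(mib_per_s, 2))  # 44.73, matching the reported 44.73 MiB/s
```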
00:30:30.206 [2024-12-10 23:01:37.820559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.820587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.833566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.833595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.842970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.842995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.854872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.854911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.871471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.871511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.887709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.887736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.903790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.903817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.917627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.917669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.206 [2024-12-10 23:01:37.927096] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.206 [2024-12-10 23:01:37.927121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.464 [2024-12-10 23:01:37.938977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.464 [2024-12-10 23:01:37.939017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.464 [2024-12-10 23:01:37.955722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.464 [2024-12-10 23:01:37.955762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.464 [2024-12-10 23:01:37.970576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.464 [2024-12-10 23:01:37.970603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.464 [2024-12-10 23:01:37.979882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.464 [2024-12-10 23:01:37.979907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.464 [2024-12-10 23:01:37.994871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.464 [2024-12-10 23:01:37.994910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.464 [2024-12-10 23:01:38.011261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.464 [2024-12-10 23:01:38.011300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.464 [2024-12-10 23:01:38.028938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.464 [2024-12-10 23:01:38.028961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.464 [2024-12-10 23:01:38.038748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:30.464 [2024-12-10 23:01:38.038775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.464 [2024-12-10 23:01:38.050433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.464 [2024-12-10 23:01:38.050458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.465 [2024-12-10 23:01:38.061111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.465 [2024-12-10 23:01:38.061135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.465 [2024-12-10 23:01:38.072803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.465 [2024-12-10 23:01:38.072844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.465 [2024-12-10 23:01:38.086923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.465 [2024-12-10 23:01:38.086951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.465 [2024-12-10 23:01:38.096958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.465 [2024-12-10 23:01:38.096984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.465 [2024-12-10 23:01:38.109417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.465 [2024-12-10 23:01:38.109443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.465 [2024-12-10 23:01:38.120704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.465 [2024-12-10 23:01:38.120732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.465 [2024-12-10 23:01:38.131780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.465 
[2024-12-10 23:01:38.131808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.465 [2024-12-10 23:01:38.147248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.465 [2024-12-10 23:01:38.147273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.465 [2024-12-10 23:01:38.156832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.465 [2024-12-10 23:01:38.156858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.465 [2024-12-10 23:01:38.168610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.465 [2024-12-10 23:01:38.168637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.465 [2024-12-10 23:01:38.179458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.465 [2024-12-10 23:01:38.179482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.195773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.195800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.211358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.211397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.221248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.221273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.233127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.233150] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.244167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.244192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.255269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.255294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.270615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.270641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.280520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.280569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.292302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.292327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.304929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.304956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.314883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.314908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.326785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.326811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:30.723 [2024-12-10 23:01:38.342957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.342982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.352184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.352208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.368197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.368224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.378243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.378269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.390097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.390135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.401186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.401211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.412275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.412300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.425061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.425087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.435138] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.435164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.723 [2024-12-10 23:01:38.447344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.723 [2024-12-10 23:01:38.447370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.463408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.463435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.481172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.481196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.490852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.490878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.502731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.502758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.517902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.517943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.527379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.527403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.542124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.542163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.552499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.552524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.564449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.564474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.575860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.575892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.589041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.589067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.599087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.599110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.611064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.611089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.627305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.627347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.643701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 
[2024-12-10 23:01:38.643729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.659234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.659260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.668925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.668951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.680815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.680860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.691405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.691429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.981 [2024-12-10 23:01:38.707227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.981 [2024-12-10 23:01:38.707266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.723244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.723286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.741349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.741374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.750921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.750947] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.762924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.762949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.779130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.779155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.788597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.788640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.800461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.800487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.811088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.811111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 11547.00 IOPS, 90.21 MiB/s [2024-12-10T22:01:38.975Z] [2024-12-10 23:01:38.826409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.826435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.836504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.836551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.848591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.848618] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.861626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.861653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.871318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.871344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.883241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.883266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.899398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.899447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.917214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.917240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.927555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.927580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.942347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.942385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.243 [2024-12-10 23:01:38.951976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.952014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:31.243 [2024-12-10 23:01:38.967308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.243 [2024-12-10 23:01:38.967335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:38.985686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:38.985712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:38.995925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:38.995949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.010481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.010505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.020272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.020313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.035524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.035557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.051638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.051666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.066611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.066639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.076146] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.076171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.091011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.091035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.107042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.107084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.116467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.116491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.131168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.131192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.147737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.147763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.162774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.162811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.171877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.171916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.186221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.186245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.196678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.196705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.211731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.211759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.525 [2024-12-10 23:01:39.227408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.525 [2024-12-10 23:01:39.227450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.245294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.245330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.255385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.255411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.267046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.267073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.281050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.281076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.290451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 
[2024-12-10 23:01:39.290478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.306497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.306536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.316389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.316429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.331022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.331047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.341284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.341309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.353156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.353184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.364278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.364303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.379405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.379431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.395370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.789 [2024-12-10 23:01:39.395396] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.789 [2024-12-10 23:01:39.404976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.790 [2024-12-10 23:01:39.405011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.790 [2024-12-10 23:01:39.416782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.790 [2024-12-10 23:01:39.416809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.790 [2024-12-10 23:01:39.427583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.790 [2024-12-10 23:01:39.427609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.790 [2024-12-10 23:01:39.442701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.790 [2024-12-10 23:01:39.442729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.790 [2024-12-10 23:01:39.452186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.790 [2024-12-10 23:01:39.452214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.790 [2024-12-10 23:01:39.467055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.790 [2024-12-10 23:01:39.467081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.790 [2024-12-10 23:01:39.477158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.790 [2024-12-10 23:01:39.477185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.790 [2024-12-10 23:01:39.489220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.790 [2024-12-10 23:01:39.489245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:31.790 [2024-12-10 23:01:39.500747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.790 [2024-12-10 23:01:39.500775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.790 [2024-12-10 23:01:39.513324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.790 [2024-12-10 23:01:39.513350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.048 [2024-12-10 23:01:39.522709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.048 [2024-12-10 23:01:39.522736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.048 [2024-12-10 23:01:39.539101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.048 [2024-12-10 23:01:39.539126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.048 [2024-12-10 23:01:39.557438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.048 [2024-12-10 23:01:39.557463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.048 [2024-12-10 23:01:39.567948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.048 [2024-12-10 23:01:39.567972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.048 [2024-12-10 23:01:39.583202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.048 [2024-12-10 23:01:39.583242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.048 [2024-12-10 23:01:39.601274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.048 [2024-12-10 23:01:39.601301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.048 [2024-12-10 23:01:39.611115] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.048 [2024-12-10 23:01:39.611139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.048 [2024-12-10 23:01:39.623343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.049 [2024-12-10 23:01:39.623368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.049 [2024-12-10 23:01:39.639305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.049 [2024-12-10 23:01:39.639330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.049 [2024-12-10 23:01:39.649400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.049 [2024-12-10 23:01:39.649434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.049 [2024-12-10 23:01:39.661617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.049 [2024-12-10 23:01:39.661644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.049 [2024-12-10 23:01:39.672123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.049 [2024-12-10 23:01:39.672163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.049 [2024-12-10 23:01:39.687967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.049 [2024-12-10 23:01:39.688006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.049 [2024-12-10 23:01:39.703661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:32.049 [2024-12-10 23:01:39.703689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.049 [2024-12-10 23:01:39.718682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:32.049 [2024-12-10 23:01:39.718709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:32.049
00:30:32.307 11498.00 IOPS, 89.83 MiB/s [2024-12-10T22:01:40.039Z]
00:30:33.342 11449.67 IOPS, 89.45 MiB/s [2024-12-10T22:01:41.074Z]
[2024-12-10 23:01:41.738195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.122 [2024-12-10 23:01:41.738221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:30:34.122 [2024-12-10 23:01:41.747805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.122 [2024-12-10 23:01:41.747847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.122 [2024-12-10 23:01:41.763955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.122 [2024-12-10 23:01:41.763981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.122 [2024-12-10 23:01:41.779835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.122 [2024-12-10 23:01:41.779862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.122 [2024-12-10 23:01:41.793671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.122 [2024-12-10 23:01:41.793698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.122 [2024-12-10 23:01:41.803614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.122 [2024-12-10 23:01:41.803640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.122 [2024-12-10 23:01:41.818890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.122 [2024-12-10 23:01:41.818913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.122 11478.75 IOPS, 89.68 MiB/s [2024-12-10T22:01:41.854Z] [2024-12-10 23:01:41.828169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.122 [2024-12-10 23:01:41.828195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.122 [2024-12-10 23:01:41.842310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.122 [2024-12-10 23:01:41.842335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:34.383 [2024-12-10 23:01:41.852806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.852848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:41.864813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.864856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:41.875559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.875609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:41.890881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.890909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:41.900677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.900703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:41.912416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.912441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:41.927180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.927221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:41.945146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.945175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:41.955301] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.955326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:41.969390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.969416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:41.979396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.979422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:41.994059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:41.994096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:42.003672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:42.003699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:42.018397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:42.018436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:42.036922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:42.036948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:42.046686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:42.046712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:42.062583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:42.062610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:42.081961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:42.081987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.383 [2024-12-10 23:01:42.102044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.383 [2024-12-10 23:01:42.102071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.117686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.117714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.127533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.127566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.142126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.142151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.151593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.151620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.167466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.167491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.182305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 
[2024-12-10 23:01:42.182333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.191646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.191672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.206121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.206147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.223047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.223073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.241117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.241142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.251973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.251998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.265984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.266025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.285858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.285885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.301852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.301894] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.311255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.311282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.327061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.327101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.345144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.345171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.355003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.355029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.644 [2024-12-10 23:01:42.370690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.644 [2024-12-10 23:01:42.370717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.380486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.380512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.391965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.391990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.405647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.405674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:34.903 [2024-12-10 23:01:42.415504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.415553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.430041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.430067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.439701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.439727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.454037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.454063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.463839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.463881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.478602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.478628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.498006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.498031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.517299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.517324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.527035] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.527072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.542332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.542357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.552518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.552564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.563725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.563752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.579697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.579724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.596980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.597005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.607097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.607122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:34.903 [2024-12-10 23:01:42.622691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:34.903 [2024-12-10 23:01:42.622718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.162 [2024-12-10 23:01:42.641106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:35.162 [2024-12-10 23:01:42.641134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.162 [2024-12-10 23:01:42.651214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.162 [2024-12-10 23:01:42.651241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.162 [2024-12-10 23:01:42.665975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.162 [2024-12-10 23:01:42.666000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.162 [2024-12-10 23:01:42.676250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.162 [2024-12-10 23:01:42.676280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.162 [2024-12-10 23:01:42.688378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.162 [2024-12-10 23:01:42.688403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.162 [2024-12-10 23:01:42.699396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.162 [2024-12-10 23:01:42.699421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.162 [2024-12-10 23:01:42.712443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.162 [2024-12-10 23:01:42.712470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.162 [2024-12-10 23:01:42.722525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.162 [2024-12-10 23:01:42.722575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.162 [2024-12-10 23:01:42.738121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.163 
[2024-12-10 23:01:42.738146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.163 [2024-12-10 23:01:42.747392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.163 [2024-12-10 23:01:42.747419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.163 [2024-12-10 23:01:42.762732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.163 [2024-12-10 23:01:42.762760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.163 [2024-12-10 23:01:42.780868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.163 [2024-12-10 23:01:42.780895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.163 [2024-12-10 23:01:42.790804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.163 [2024-12-10 23:01:42.790848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.163 [2024-12-10 23:01:42.806961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.163 [2024-12-10 23:01:42.806987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.163 [2024-12-10 23:01:42.824893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.163 [2024-12-10 23:01:42.824933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.163 11497.80 IOPS, 89.83 MiB/s [2024-12-10T22:01:42.895Z] [2024-12-10 23:01:42.833958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.163 [2024-12-10 23:01:42.833986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.163 [2024-12-10 23:01:42.875759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.163 
[2024-12-10 23:01:42.875785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.163 00:30:35.163 Latency(us) 00:30:35.163 [2024-12-10T22:01:42.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.163 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:35.163 Nvme1n1 : 5.05 11414.99 89.18 0.00 0.00 11110.36 2463.67 51263.72 00:30:35.163 [2024-12-10T22:01:42.895Z] =================================================================================================================== 00:30:35.163 [2024-12-10T22:01:42.895Z] Total : 11414.99 89.18 0.00 0.00 11110.36 2463.67 51263.72 00:30:35.163 [2024-12-10 23:01:42.881252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.163 [2024-12-10 23:01:42.881274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.163 [2024-12-10 23:01:42.889250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.163 [2024-12-10 23:01:42.889272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.423 [2024-12-10 23:01:42.897247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.423 [2024-12-10 23:01:42.897268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.423 [2024-12-10 23:01:42.905323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.423 [2024-12-10 23:01:42.905371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.423 [2024-12-10 23:01:42.913314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.423 [2024-12-10 23:01:42.913360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.423 [2024-12-10 23:01:42.921316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:30:35.423 [2024-12-10 23:01:42.921365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.423 [2024-12-10 23:01:42.929318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.423 [2024-12-10 23:01:42.929365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.423 [2024-12-10 23:01:42.937306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.423 [2024-12-10 23:01:42.937349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.423 [2024-12-10 23:01:42.945323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.423 [2024-12-10 23:01:42.945376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.423 [2024-12-10 23:01:42.953316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.423 [2024-12-10 23:01:42.953364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:42.961311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:42.961360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:42.969319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:42.969364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:42.977325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:42.977375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:42.985321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 
[2024-12-10 23:01:42.985372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:42.993314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:42.993356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.001313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.001361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.009314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.009362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.017311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.017360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.025256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.025279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.033247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.033267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.041245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.041265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.049243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.049263] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.057292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.057331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.065308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.065356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.073314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.073362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.081247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.081268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.089244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.089263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 [2024-12-10 23:01:43.097243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:35.424 [2024-12-10 23:01:43.097262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:35.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (217743) - No such process 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 217743 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.424 23:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.424 delay0 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.424 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:35.684 [2024-12-10 23:01:43.257731] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:43.810 Initializing NVMe Controllers 00:30:43.810 Attached to NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:43.810 Initialization complete. Launching workers. 00:30:43.810 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 236, failed: 24788 00:30:43.810 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 24906, failed to submit 118 00:30:43.810 success 24835, unsuccessful 71, failed 0 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:43.810 rmmod nvme_tcp 00:30:43.810 rmmod nvme_fabrics 00:30:43.810 rmmod nvme_keyring 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 216417 ']' 
00:30:43.810 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 216417 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 216417 ']' 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 216417 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216417 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216417' 00:30:43.811 killing process with pid 216417 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 216417 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 216417 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:43.811 23:01:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.811 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.189 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:45.189 00:30:45.189 real 0m28.929s 00:30:45.189 user 0m41.180s 00:30:45.189 sys 0m10.163s 00:30:45.189 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:45.189 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.189 ************************************ 00:30:45.189 END TEST nvmf_zcopy 00:30:45.189 ************************************ 00:30:45.189 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:45.189 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:45.189 23:01:52 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.189 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:45.189 ************************************ 00:30:45.189 START TEST nvmf_nmic 00:30:45.189 ************************************ 00:30:45.189 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:45.189 * Looking for test storage... 00:30:45.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:45.189 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:45.189 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:30:45.189 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@337 -- # read -ra ver2 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@355 -- # echo 2 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:45.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.448 --rc genhtml_branch_coverage=1 00:30:45.448 --rc genhtml_function_coverage=1 00:30:45.448 --rc genhtml_legend=1 00:30:45.448 --rc geninfo_all_blocks=1 00:30:45.448 --rc geninfo_unexecuted_blocks=1 00:30:45.448 00:30:45.448 ' 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:45.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.448 --rc genhtml_branch_coverage=1 00:30:45.448 --rc genhtml_function_coverage=1 00:30:45.448 --rc genhtml_legend=1 00:30:45.448 --rc geninfo_all_blocks=1 00:30:45.448 --rc geninfo_unexecuted_blocks=1 00:30:45.448 00:30:45.448 ' 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:45.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.448 --rc genhtml_branch_coverage=1 00:30:45.448 --rc genhtml_function_coverage=1 00:30:45.448 --rc genhtml_legend=1 00:30:45.448 --rc geninfo_all_blocks=1 00:30:45.448 --rc geninfo_unexecuted_blocks=1 00:30:45.448 
00:30:45.448 ' 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:45.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.448 --rc genhtml_branch_coverage=1 00:30:45.448 --rc genhtml_function_coverage=1 00:30:45.448 --rc genhtml_legend=1 00:30:45.448 --rc geninfo_all_blocks=1 00:30:45.448 --rc geninfo_unexecuted_blocks=1 00:30:45.448 00:30:45.448 ' 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.448 23:01:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.448 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.449 23:01:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:45.449 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.355 23:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:47.355 23:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:47.355 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:47.355 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.355 23:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.355 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:47.356 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.356 23:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:47.356 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:47.356 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:47.356 23:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:47.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:47.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:30:47.614 00:30:47.614 --- 10.0.0.2 ping statistics --- 00:30:47.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.614 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:47.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:47.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:30:47.614 00:30:47.614 --- 10.0.0.1 ping statistics --- 00:30:47.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.614 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=221133 
00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 221133 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 221133 ']' 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:47.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:47.614 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.614 [2024-12-10 23:01:55.206724] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:47.614 [2024-12-10 23:01:55.207786] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:30:47.614 [2024-12-10 23:01:55.207840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:47.614 [2024-12-10 23:01:55.279415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:47.614 [2024-12-10 23:01:55.340399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:47.614 [2024-12-10 23:01:55.340450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:47.614 [2024-12-10 23:01:55.340481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:47.614 [2024-12-10 23:01:55.340494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:47.614 [2024-12-10 23:01:55.340504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:47.614 [2024-12-10 23:01:55.342228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.614 [2024-12-10 23:01:55.342311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:47.614 [2024-12-10 23:01:55.342371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:47.614 [2024-12-10 23:01:55.342374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.872 [2024-12-10 23:01:55.433215] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:47.872 [2024-12-10 23:01:55.433398] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:47.872 [2024-12-10 23:01:55.433706] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:47.872 [2024-12-10 23:01:55.434365] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:47.872 [2024-12-10 23:01:55.434618] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:47.872 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.872 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:30:47.872 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:47.872 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:47.872 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.872 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:47.872 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:47.872 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.872 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.872 [2024-12-10 23:01:55.487117] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.872 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 Malloc0 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 [2024-12-10 23:01:55.555264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:47.873 test case1: single bdev can't be used in multiple subsystems 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 [2024-12-10 23:01:55.579038] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:30:47.873 [2024-12-10 23:01:55.579069] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:47.873 [2024-12-10 23:01:55.579084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.873 request: 00:30:47.873 { 00:30:47.873 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:47.873 "namespace": { 00:30:47.873 "bdev_name": "Malloc0", 00:30:47.873 "no_auto_visible": false, 00:30:47.873 "hide_metadata": false 00:30:47.873 }, 00:30:47.873 "method": "nvmf_subsystem_add_ns", 00:30:47.873 "req_id": 1 00:30:47.873 } 00:30:47.873 Got JSON-RPC error response 00:30:47.873 response: 00:30:47.873 { 00:30:47.873 "code": -32602, 00:30:47.873 "message": "Invalid parameters" 00:30:47.873 } 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:47.873 Adding namespace failed - expected result. 
00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:47.873 test case2: host connect to nvmf target in multiple paths 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 [2024-12-10 23:01:55.587107] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.873 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:48.133 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:48.393 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:48.393 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:30:48.393 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:48.393 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:48.393 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:30:50.295 23:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:50.296 23:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:50.296 23:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:50.296 23:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:50.296 23:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:50.296 23:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:30:50.296 23:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:50.296 [global] 00:30:50.296 thread=1 00:30:50.296 invalidate=1 00:30:50.296 rw=write 00:30:50.296 time_based=1 00:30:50.296 runtime=1 00:30:50.296 ioengine=libaio 00:30:50.296 direct=1 00:30:50.296 bs=4096 00:30:50.296 iodepth=1 00:30:50.296 norandommap=0 00:30:50.296 numjobs=1 00:30:50.296 00:30:50.555 verify_dump=1 00:30:50.555 verify_backlog=512 00:30:50.555 verify_state_save=0 00:30:50.555 do_verify=1 00:30:50.555 verify=crc32c-intel 00:30:50.555 [job0] 00:30:50.555 filename=/dev/nvme0n1 00:30:50.555 Could not set queue depth (nvme0n1) 00:30:50.556 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:50.556 fio-3.35 00:30:50.556 Starting 1 thread 00:30:51.933 00:30:51.933 job0: (groupid=0, jobs=1): err= 0: pid=221631: Tue Dec 10 
23:01:59 2024 00:30:51.933 read: IOPS=1265, BW=5060KiB/s (5182kB/s)(5192KiB/1026msec) 00:30:51.933 slat (nsec): min=5123, max=53751, avg=12356.28, stdev=6710.04 00:30:51.933 clat (usec): min=189, max=41032, avg=557.00, stdev=3560.16 00:30:51.933 lat (usec): min=204, max=41045, avg=569.36, stdev=3560.26 00:30:51.933 clat percentiles (usec): 00:30:51.933 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 210], 00:30:51.933 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 249], 00:30:51.933 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:30:51.933 | 99.00th=[ 506], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:51.933 | 99.99th=[41157] 00:30:51.933 write: IOPS=1497, BW=5988KiB/s (6132kB/s)(6144KiB/1026msec); 0 zone resets 00:30:51.933 slat (usec): min=7, max=30365, avg=31.18, stdev=774.53 00:30:51.933 clat (usec): min=126, max=288, avg=149.29, stdev=13.50 00:30:51.933 lat (usec): min=134, max=30633, avg=180.47, stdev=777.71 00:30:51.933 clat percentiles (usec): 00:30:51.933 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:30:51.933 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:30:51.933 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 161], 95.00th=[ 169], 00:30:51.933 | 99.00th=[ 198], 99.50th=[ 223], 99.90th=[ 269], 99.95th=[ 289], 00:30:51.933 | 99.99th=[ 289] 00:30:51.933 bw ( KiB/s): min= 2688, max= 9600, per=100.00%, avg=6144.00, stdev=4887.52, samples=2 00:30:51.933 iops : min= 672, max= 2400, avg=1536.00, stdev=1221.88, samples=2 00:30:51.933 lat (usec) : 250=81.69%, 500=17.85%, 750=0.11% 00:30:51.933 lat (msec) : 50=0.35% 00:30:51.933 cpu : usr=1.95%, sys=3.22%, ctx=2836, majf=0, minf=1 00:30:51.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.933 issued rwts: 
total=1298,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:51.933 00:30:51.933 Run status group 0 (all jobs): 00:30:51.933 READ: bw=5060KiB/s (5182kB/s), 5060KiB/s-5060KiB/s (5182kB/s-5182kB/s), io=5192KiB (5317kB), run=1026-1026msec 00:30:51.933 WRITE: bw=5988KiB/s (6132kB/s), 5988KiB/s-5988KiB/s (6132kB/s-6132kB/s), io=6144KiB (6291kB), run=1026-1026msec 00:30:51.933 00:30:51.933 Disk stats (read/write): 00:30:51.933 nvme0n1: ios=1320/1536, merge=0/0, ticks=1522/215, in_queue=1737, util=98.50% 00:30:51.933 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:51.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:51.933 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:51.933 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:51.934 23:01:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:51.934 rmmod nvme_tcp 00:30:51.934 rmmod nvme_fabrics 00:30:51.934 rmmod nvme_keyring 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 221133 ']' 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 221133 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 221133 ']' 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 221133 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221133 00:30:51.934 
23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 221133' 00:30:51.934 killing process with pid 221133 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 221133 00:30:51.934 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 221133 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.193 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.729 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:54.729 00:30:54.729 real 0m9.107s 00:30:54.729 user 0m16.928s 00:30:54.729 sys 0m3.395s 00:30:54.729 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.729 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:54.729 ************************************ 00:30:54.729 END TEST nvmf_nmic 00:30:54.729 ************************************ 00:30:54.729 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:54.729 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:54.729 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.729 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:54.729 ************************************ 00:30:54.729 START TEST nvmf_fio_target 00:30:54.729 ************************************ 00:30:54.729 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:54.729 * Looking for test storage... 
00:30:54.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.729 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.730 
23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:54.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.730 --rc genhtml_branch_coverage=1 00:30:54.730 --rc genhtml_function_coverage=1 00:30:54.730 --rc genhtml_legend=1 00:30:54.730 --rc geninfo_all_blocks=1 00:30:54.730 --rc geninfo_unexecuted_blocks=1 00:30:54.730 00:30:54.730 ' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:54.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.730 --rc genhtml_branch_coverage=1 00:30:54.730 --rc genhtml_function_coverage=1 00:30:54.730 --rc genhtml_legend=1 00:30:54.730 --rc geninfo_all_blocks=1 00:30:54.730 --rc geninfo_unexecuted_blocks=1 00:30:54.730 00:30:54.730 ' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:54.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.730 --rc genhtml_branch_coverage=1 00:30:54.730 --rc genhtml_function_coverage=1 00:30:54.730 --rc genhtml_legend=1 00:30:54.730 --rc geninfo_all_blocks=1 00:30:54.730 --rc geninfo_unexecuted_blocks=1 00:30:54.730 00:30:54.730 ' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:54.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.730 --rc genhtml_branch_coverage=1 00:30:54.730 --rc genhtml_function_coverage=1 00:30:54.730 --rc genhtml_legend=1 00:30:54.730 --rc geninfo_all_blocks=1 
00:30:54.730 --rc geninfo_unexecuted_blocks=1 00:30:54.730 00:30:54.730 ' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:54.730 
23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.730 23:02:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.730 
23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.730 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.730 23:02:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:56.637 23:02:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:56.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:56.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.637 
23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.637 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:56.637 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:56.638 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:56.638 23:02:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:56.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:30:56.638 00:30:56.638 --- 10.0.0.2 ping statistics --- 00:30:56.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.638 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:56.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:30:56.638 00:30:56.638 --- 10.0.0.1 ping statistics --- 00:30:56.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.638 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:56.638 23:02:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=223707 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 223707 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 223707 ']' 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:56.638 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:56.638 [2024-12-10 23:02:04.339804] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:56.638 [2024-12-10 23:02:04.340919] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:30:56.638 [2024-12-10 23:02:04.340980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.897 [2024-12-10 23:02:04.412205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:56.897 [2024-12-10 23:02:04.467550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.897 [2024-12-10 23:02:04.467613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.897 [2024-12-10 23:02:04.467627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.897 [2024-12-10 23:02:04.467640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.897 [2024-12-10 23:02:04.467651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.897 [2024-12-10 23:02:04.469311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.897 [2024-12-10 23:02:04.469388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:56.897 [2024-12-10 23:02:04.469424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:56.897 [2024-12-10 23:02:04.469428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.897 [2024-12-10 23:02:04.563252] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:56.897 [2024-12-10 23:02:04.563510] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:56.897 [2024-12-10 23:02:04.563773] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:56.897 [2024-12-10 23:02:04.564436] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:30:56.897 [2024-12-10 23:02:04.564686] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:30:56.897 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:56.897 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:30:56.897 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:56.897 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:56.897 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:30:56.897 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:56.897 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:30:57.157 [2024-12-10 23:02:04.866183] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:57.416 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:30:57.674 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:30:57.674 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:30:57.932 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:30:57.932 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:30:58.191 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:30:58.192 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:30:58.763 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:30:58.763 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:30:58.763 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:30:59.022 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:30:59.022 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:30:59.588 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:30:59.588 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:30:59.846 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:30:59.846 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:31:00.105 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:31:00.364 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:31:00.364 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:00.623 23:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:31:00.623 23:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:31:00.881 23:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:01.139 [2024-12-10 23:02:08.698349] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:01.139 23:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:31:01.397 23:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:31:01.655 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:31:01.914 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:31:01.914 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:31:01.914 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:31:01.914 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:31:01.914 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:31:01.914 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:31:03.877 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:31:03.877 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:31:03.877 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:31:03.877 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:31:03.877 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:31:03.877 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:31:03.877 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:31:03.877 [global]
00:31:03.877 thread=1
00:31:03.877 invalidate=1
00:31:03.877 rw=write
00:31:03.877 time_based=1
00:31:03.877 runtime=1
00:31:03.877 ioengine=libaio
00:31:03.877 direct=1
00:31:03.877 bs=4096
00:31:03.877 iodepth=1
00:31:03.877 norandommap=0
00:31:03.877 numjobs=1
00:31:03.877
00:31:03.877 verify_dump=1
00:31:03.877 verify_backlog=512
00:31:03.877 verify_state_save=0
00:31:03.877 do_verify=1
00:31:03.877 verify=crc32c-intel
00:31:03.877 [job0]
00:31:03.877 filename=/dev/nvme0n1
00:31:03.877 [job1]
00:31:03.877 filename=/dev/nvme0n2
00:31:03.877 [job2]
00:31:03.877 filename=/dev/nvme0n3
00:31:03.877 [job3]
00:31:03.877 filename=/dev/nvme0n4
00:31:03.877 Could not set queue depth (nvme0n1)
00:31:03.877 Could not set queue depth (nvme0n2)
00:31:03.877 Could not set queue depth (nvme0n3)
00:31:03.877 Could not set queue depth (nvme0n4)
00:31:04.139 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:04.139 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:04.139 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:04.139 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:04.139 fio-3.35
00:31:04.139 Starting 4 threads
00:31:05.515
00:31:05.515 job0: (groupid=0, jobs=1): err= 0: pid=224774: Tue Dec 10 23:02:12 2024
00:31:05.515 read: IOPS=480, BW=1921KiB/s (1967kB/s)(1992KiB/1037msec)
00:31:05.515 slat (nsec): min=5990, max=45472, avg=13559.30, stdev=7789.00
00:31:05.515 clat (usec): min=249, max=42088, avg=1851.95, stdev=7804.03
00:31:05.515 lat (usec): min=255, max=42096, avg=1865.51, stdev=7805.58
00:31:05.515 clat percentiles (usec):
00:31:05.515 | 1.00th=[ 253], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277],
00:31:05.515 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306],
00:31:05.515 | 70.00th=[ 314], 80.00th=[ 330], 90.00th=[ 363], 95.00th=[ 420],
00:31:05.515 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:31:05.515 | 99.99th=[42206]
00:31:05.515 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets
00:31:05.515 slat (nsec): min=6156, max=39323, avg=13933.89, stdev=5419.70
00:31:05.515 clat (usec): min=162, max=255, avg=188.01, stdev=10.58
00:31:05.515 lat (usec): min=174, max=287, avg=201.95, stdev=12.33
00:31:05.515 clat percentiles (usec):
00:31:05.515 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 182],
00:31:05.515 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190],
00:31:05.515 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206],
00:31:05.515 | 99.00th=[ 225], 99.50th=[ 239], 99.90th=[ 255], 99.95th=[ 255],
00:31:05.515 | 99.99th=[ 255]
00:31:05.515 bw ( KiB/s): min= 4096, max= 4096, per=20.68%, avg=4096.00, stdev= 0.00, samples=1
00:31:05.515 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:31:05.515 lat (usec) : 250=50.79%, 500=47.23%, 1000=0.10%
00:31:05.515 lat (msec) : 50=1.88%
00:31:05.515 cpu : usr=0.77%, sys=1.54%, ctx=1011, majf=0, minf=1
00:31:05.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:05.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:05.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:05.515 issued rwts: total=498,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:05.515 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:05.515 job1: (groupid=0, jobs=1): err= 0: pid=224775: Tue Dec 10 23:02:12 2024
00:31:05.515 read: IOPS=24, BW=98.8KiB/s (101kB/s)(100KiB/1012msec)
00:31:05.515 slat (nsec): min=11243, max=36920, avg=22300.12, stdev=8026.46
00:31:05.515 clat (usec): min=340, max=41304, avg=34480.92, stdev=15184.13
00:31:05.515 lat (usec): min=357, max=41323, avg=34503.22, stdev=15183.56
00:31:05.515 clat percentiles (usec):
00:31:05.515 | 1.00th=[ 343], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[40633],
00:31:05.515 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:31:05.515 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:31:05.515 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:31:05.515 | 99.99th=[41157]
00:31:05.515 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets
00:31:05.515 slat (nsec): min=8142, max=54561, avg=20683.15, stdev=7797.84
00:31:05.515 clat (usec): min=181, max=464, avg=265.09, stdev=39.90
00:31:05.515 lat (usec): min=201, max=491, avg=285.77, stdev=37.75
00:31:05.515 clat percentiles (usec):
00:31:05.515 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 223], 20.00th=[ 235],
00:31:05.515 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 269],
00:31:05.515 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 330],
00:31:05.515 | 99.00th=[ 392], 99.50th=[ 437], 99.90th=[ 465], 99.95th=[ 465],
00:31:05.515 | 99.99th=[ 465]
00:31:05.515 bw ( KiB/s): min= 4096, max= 4096, per=20.68%, avg=4096.00, stdev= 0.00, samples=1
00:31:05.515 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:31:05.515 lat (usec) : 250=38.36%, 500=57.73%
00:31:05.515 lat (msec) : 50=3.91%
00:31:05.515 cpu : usr=0.40%, sys=1.68%, ctx=539, majf=0, minf=1
00:31:05.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:05.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:05.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:05.515 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:05.515 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:05.515 job2: (groupid=0, jobs=1): err= 0: pid=224777: Tue Dec 10 23:02:12 2024
00:31:05.515 read: IOPS=1800, BW=7201KiB/s (7374kB/s)(7208KiB/1001msec)
00:31:05.515 slat (nsec): min=6127, max=41878, avg=14194.77, stdev=6260.32
00:31:05.515 clat (usec): min=191, max=621, avg=274.81, stdev=45.43
00:31:05.515 lat (usec): min=208, max=629, avg=289.00, stdev=47.84
00:31:05.515 clat percentiles (usec):
00:31:05.515 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 237],
00:31:05.515 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 273],
00:31:05.515 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 338],
00:31:05.515 | 99.00th=[ 461], 99.50th=[ 478], 99.90th=[ 515], 99.95th=[ 619],
00:31:05.515 | 99.99th=[ 619]
00:31:05.515 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:31:05.515 slat (nsec): min=8086, max=61858, avg=18053.53, stdev=7803.59
00:31:05.515 clat (usec): min=152, max=444, avg=207.79, stdev=43.30
00:31:05.515 lat (usec): min=164, max=486, avg=225.85, stdev=47.12
00:31:05.515 clat percentiles (usec):
00:31:05.515 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174],
00:31:05.515 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 198], 60.00th=[ 206],
00:31:05.515 | 70.00th=[ 219], 80.00th=[ 241], 90.00th=[ 269], 95.00th=[ 289],
00:31:05.515 | 99.00th=[ 363], 99.50th=[ 396], 99.90th=[ 420], 99.95th=[ 445],
00:31:05.515 | 99.99th=[ 445]
00:31:05.515 bw ( KiB/s): min= 8192, max= 8192, per=41.37%, avg=8192.00, stdev= 0.00, samples=1
00:31:05.515 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:31:05.515 lat (usec) : 250=60.08%, 500=39.87%, 750=0.05%
00:31:05.515 cpu : usr=4.20%, sys=8.50%, ctx=3851, majf=0, minf=1
00:31:05.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:05.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:05.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:05.515 issued rwts: total=1802,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:05.515 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:05.515 job3: (groupid=0, jobs=1): err= 0: pid=224778: Tue Dec 10 23:02:12 2024
00:31:05.515 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec)
00:31:05.515 slat (nsec): min=4578, max=71724, avg=11197.24, stdev=7025.32
00:31:05.515 clat (usec): min=185, max=1388, avg=256.37, stdev=57.11
00:31:05.515 lat (usec): min=192, max=1402, avg=267.56, stdev=61.14
00:31:05.515 clat percentiles (usec):
00:31:05.515 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 225],
00:31:05.515 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243],
00:31:05.515 | 70.00th=[ 253], 80.00th=[ 273], 90.00th=[ 310], 95.00th=[ 379],
00:31:05.515 | 99.00th=[ 469], 99.50th=[ 482], 99.90th=[ 494], 99.95th=[ 498],
00:31:05.515 | 99.99th=[ 1385]
00:31:05.515 write: IOPS=2059, BW=8240KiB/s (8438kB/s)(8248KiB/1001msec); 0 zone resets
00:31:05.515 slat (nsec): min=6113, max=45925, avg=13746.10, stdev=5773.95
00:31:05.515 clat (usec): min=153, max=3309, avg=198.47, stdev=100.38
00:31:05.515 lat (usec): min=162, max=3329, avg=212.22, stdev=101.52
00:31:05.515 clat percentiles (usec):
00:31:05.515 | 1.00th=[ 161], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172],
00:31:05.515 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188],
00:31:05.515 | 70.00th=[ 196], 80.00th=[ 212], 90.00th=[ 241], 95.00th=[ 297],
00:31:05.515 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 383], 99.95th=[ 3064],
00:31:05.515 | 99.99th=[ 3294]
00:31:05.515 bw ( KiB/s): min= 8192, max= 8192, per=41.37%, avg=8192.00, stdev= 0.00, samples=1
00:31:05.515 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:31:05.515 lat (usec) : 250=79.05%, 500=20.88%
00:31:05.515 lat (msec) : 2=0.02%, 4=0.05%
00:31:05.515 cpu : usr=3.50%, sys=5.00%, ctx=4110, majf=0, minf=3
00:31:05.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:05.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:05.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:05.516 issued rwts: total=2048,2062,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:05.516 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:05.516
00:31:05.516 Run status group 0 (all jobs):
00:31:05.516 READ: bw=16.5MiB/s (17.3MB/s), 98.8KiB/s-8184KiB/s (101kB/s-8380kB/s), io=17.1MiB (17.9MB), run=1001-1037msec
00:31:05.516 WRITE: bw=19.3MiB/s (20.3MB/s), 1975KiB/s-8240KiB/s (2022kB/s-8438kB/s), io=20.1MiB (21.0MB), run=1001-1037msec
00:31:05.516
00:31:05.516 Disk stats (read/write):
00:31:05.516 nvme0n1: ios=487/512, merge=0/0, ticks=983/95, in_queue=1078, util=85.57%
00:31:05.516 nvme0n2: ios=44/512, merge=0/0, ticks=1606/130, in_queue=1736, util=89.73%
00:31:05.516 nvme0n3: ios=1559/1607, merge=0/0, ticks=1314/334, in_queue=1648, util=93.63%
00:31:05.516 nvme0n4: ios=1593/1939, merge=0/0, ticks=457/376, in_queue=833, util=95.58%
00:31:05.516 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:31:05.516 [global]
00:31:05.516 thread=1
00:31:05.516 invalidate=1
00:31:05.516 rw=randwrite
00:31:05.516 time_based=1
00:31:05.516 runtime=1
00:31:05.516 ioengine=libaio
00:31:05.516 direct=1
00:31:05.516 bs=4096
00:31:05.516 iodepth=1
00:31:05.516 norandommap=0
00:31:05.516 numjobs=1
00:31:05.516
00:31:05.516 verify_dump=1
00:31:05.516 verify_backlog=512
00:31:05.516 verify_state_save=0
00:31:05.516 do_verify=1
00:31:05.516 verify=crc32c-intel
00:31:05.516 [job0]
00:31:05.516 filename=/dev/nvme0n1
00:31:05.516 [job1]
00:31:05.516 filename=/dev/nvme0n2
00:31:05.516 [job2]
00:31:05.516 filename=/dev/nvme0n3
00:31:05.516 [job3]
00:31:05.516 filename=/dev/nvme0n4
00:31:05.516 Could not set queue depth (nvme0n1)
00:31:05.516 Could not set queue depth (nvme0n2)
00:31:05.516 Could not set queue depth (nvme0n3)
00:31:05.516 Could not set queue depth (nvme0n4)
00:31:05.516 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:05.516 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:05.516 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:05.516 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:05.516 fio-3.35
00:31:05.516 Starting 4 threads
00:31:06.892
00:31:06.892 job0: (groupid=0, jobs=1): err= 0: pid=225011: Tue Dec 10 23:02:14 2024
00:31:06.892 read: IOPS=1291, BW=5165KiB/s (5289kB/s)(5196KiB/1006msec)
00:31:06.892 slat (nsec): min=4857, max=66739, avg=16290.24, stdev=11642.61
00:31:06.892 clat (usec): min=237, max=41427, avg=422.71, stdev=1603.42
00:31:06.892 lat (usec): min=244, max=41451, avg=439.00, stdev=1604.14
00:31:06.892 clat percentiles (usec):
00:31:06.892 | 1.00th=[ 247], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 314],
00:31:06.892 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 367],
00:31:06.892 | 70.00th=[ 392], 80.00th=[ 408], 90.00th=[ 429], 95.00th=[ 469],
00:31:06.892 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[41157], 99.95th=[41681],
00:31:06.892 | 99.99th=[41681]
00:31:06.892 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets
00:31:06.892 slat (nsec): min=6838, max=57479, avg=13515.18, stdev=6781.55
00:31:06.892 clat (usec): min=153, max=1184, avg=260.09, stdev=52.81
00:31:06.892 lat (usec): min=162, max=1192, avg=273.60, stdev=54.88
00:31:06.892 clat percentiles (usec):
00:31:06.892 | 1.00th=[ 172], 5.00th=[ 202], 10.00th=[ 221], 20.00th=[ 229],
00:31:06.892 | 30.00th=[ 237], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265],
00:31:06.892 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 330],
00:31:06.892 | 99.00th=[ 433], 99.50th=[ 469], 99.90th=[ 922], 99.95th=[ 1188],
00:31:06.892 | 99.99th=[ 1188]
00:31:06.892 bw ( KiB/s): min= 4096, max= 8192, per=26.98%, avg=6144.00, stdev=2896.31, samples=2
00:31:06.892 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2
00:31:06.892 lat (usec) : 250=23.42%, 500=75.45%, 750=0.99%, 1000=0.04%
00:31:06.892 lat (msec) : 2=0.04%, 50=0.07%
00:31:06.892 cpu : usr=1.89%, sys=5.17%, ctx=2836, majf=0, minf=2
00:31:06.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:06.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:06.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:06.892 issued rwts: total=1299,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:06.892 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:06.892 job1: (groupid=0, jobs=1): err= 0: pid=225012: Tue Dec 10 23:02:14 2024
00:31:06.892 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec)
00:31:06.892 slat (nsec): min=6050, max=63396, avg=12738.31, stdev=6710.15
00:31:06.892 clat (usec): min=200, max=611, avg=259.91, stdev=42.94
00:31:06.892 lat (usec): min=207, max=640, avg=272.65, stdev=46.60
00:31:06.892 clat percentiles (usec):
00:31:06.892 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223],
00:31:06.892 | 30.00th=[ 237], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 269],
00:31:06.892 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302],
00:31:06.892 | 99.00th=[ 474], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 611],
00:31:06.892 | 99.99th=[ 611]
00:31:06.892 write: IOPS=2140, BW=8563KiB/s (8769kB/s)(8572KiB/1001msec); 0 zone resets
00:31:06.892 slat (nsec): min=7660, max=58624, avg=14582.22, stdev=7527.45
00:31:06.892 clat (usec): min=143, max=270, avg=182.11, stdev=24.03
00:31:06.892 lat (usec): min=151, max=297, avg=196.70, stdev=29.97
00:31:06.892 clat percentiles (usec):
00:31:06.892 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159],
00:31:06.892 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 188],
00:31:06.892 | 70.00th=[ 196], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 227],
00:31:06.892 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 262], 99.95th=[ 269],
00:31:06.892 | 99.99th=[ 269]
00:31:06.892 bw ( KiB/s): min=10200, max=10200, per=44.79%, avg=10200.00, stdev= 0.00, samples=1
00:31:06.892 iops : min= 2550, max= 2550, avg=2550.00, stdev= 0.00, samples=1
00:31:06.892 lat (usec) : 250=71.13%, 500=28.54%, 750=0.33%
00:31:06.892 cpu : usr=4.20%, sys=7.50%, ctx=4193, majf=0, minf=1
00:31:06.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:06.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:06.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:06.892 issued rwts: total=2048,2143,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:06.892 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:06.892 job2: (groupid=0, jobs=1): err= 0: pid=225013: Tue Dec 10 23:02:14 2024
00:31:06.892 read: IOPS=23, BW=95.8KiB/s (98.1kB/s)(96.0KiB/1002msec)
00:31:06.892 slat (nsec): min=5991, max=39831, avg=22847.38, stdev=10481.66
00:31:06.892 clat (usec): min=321, max=41088, avg=37535.26, stdev=11455.47
00:31:06.892 lat (usec): min=337, max=41108, avg=37558.10, stdev=11453.85
00:31:06.892 clat percentiles (usec):
00:31:06.892 | 1.00th=[ 322], 5.00th=[ 367], 10.00th=[40633], 20.00th=[40633],
00:31:06.892 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:31:06.892 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:31:06.892 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:31:06.892 | 99.99th=[41157]
00:31:06.892 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets
00:31:06.892 slat (nsec): min=6158, max=30507, avg=7333.00, stdev=2152.45
00:31:06.892 clat (usec): min=159, max=276, avg=179.69, stdev=11.55
00:31:06.892 lat (usec): min=165, max=306, avg=187.02, stdev=12.09
00:31:06.892 clat percentiles (usec):
00:31:06.892 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172],
00:31:06.892 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182],
00:31:06.892 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 202],
00:31:06.892 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 277], 99.95th=[ 277],
00:31:06.892 | 99.99th=[ 277]
00:31:06.892 bw ( KiB/s): min= 4096, max= 4096, per=17.99%, avg=4096.00, stdev= 0.00, samples=1
00:31:06.892 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:31:06.892 lat (usec) : 250=95.34%, 500=0.56%
00:31:06.892 lat (msec) : 50=4.10%
00:31:06.892 cpu : usr=0.10%, sys=0.50%, ctx=537, majf=0, minf=1
00:31:06.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:06.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:06.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:06.892 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:06.892 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:06.892 job3: (groupid=0, jobs=1): err= 0: pid=225014: Tue Dec 10 23:02:14 2024
00:31:06.892 read: IOPS=1498, BW=5994KiB/s (6138kB/s)(6000KiB/1001msec)
00:31:06.892 slat (nsec): min=6249, max=56839, avg=14527.30, stdev=7557.62
00:31:06.892 clat (usec): min=222, max=729, avg=359.19, stdev=65.21
00:31:06.892 lat (usec): min=229, max=746, avg=373.72, stdev=66.79
00:31:06.892 clat percentiles (usec):
00:31:06.892 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 302],
00:31:06.892 | 30.00th=[ 318], 40.00th=[ 334], 50.00th=[ 351], 60.00th=[ 371],
00:31:06.892 | 70.00th=[ 383], 80.00th=[ 404], 90.00th=[ 437], 95.00th=[ 482],
00:31:06.892 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 652], 99.95th=[ 725],
00:31:06.892 | 99.99th=[ 725]
00:31:06.892 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:31:06.892 slat (nsec): min=7732, max=76669, avg=15005.07, stdev=7977.13
00:31:06.892 clat (usec): min=170, max=924, avg=260.67, stdev=43.95
00:31:06.892 lat (usec): min=180, max=940, avg=275.68, stdev=47.73
00:31:06.892 clat percentiles (usec):
00:31:06.892 | 1.00th=[ 192], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231],
00:31:06.892 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265],
00:31:06.892 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 322],
00:31:06.892 | 99.00th=[ 433], 99.50th=[ 465], 99.90th=[ 619], 99.95th=[ 922],
00:31:06.892 | 99.99th=[ 922]
00:31:06.892 bw ( KiB/s): min= 8192, max= 8192, per=35.97%, avg=8192.00, stdev= 0.00, samples=1
00:31:06.892 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:31:06.892 lat (usec) : 250=20.65%, 500=77.44%, 750=1.88%, 1000=0.03%
00:31:06.892 cpu : usr=3.20%, sys=6.00%, ctx=3038, majf=0, minf=1
00:31:06.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:06.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:06.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:06.892 issued rwts: total=1500,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:06.892 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:06.892
00:31:06.893 Run status group 0 (all jobs):
00:31:06.893 READ: bw=18.9MiB/s (19.8MB/s), 95.8KiB/s-8184KiB/s (98.1kB/s-8380kB/s), io=19.0MiB (20.0MB), run=1001-1006msec
00:31:06.893 WRITE: bw=22.2MiB/s (23.3MB/s), 2044KiB/s-8563KiB/s (2093kB/s-8769kB/s), io=22.4MiB (23.5MB), run=1001-1006msec
00:31:06.893
00:31:06.893 Disk stats (read/write):
00:31:06.893 nvme0n1: ios=1152/1536, merge=0/0, ticks=1211/388, in_queue=1599, util=86.07%
00:31:06.893 nvme0n2: ios=1685/2048, merge=0/0, ticks=637/344, in_queue=981, util=91.37%
00:31:06.893 nvme0n3: ios=70/512, merge=0/0, ticks=1460/89, in_queue=1549, util=93.76%
00:31:06.893 nvme0n4: ios=1151/1536, merge=0/0, ticks=493/382, in_queue=875, util=95.28%
00:31:06.893 23:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:31:06.893 [global]
00:31:06.893 thread=1
00:31:06.893 invalidate=1
00:31:06.893 rw=write
00:31:06.893 time_based=1
00:31:06.893 runtime=1
00:31:06.893 ioengine=libaio
00:31:06.893 direct=1
00:31:06.893 bs=4096
00:31:06.893 iodepth=128
00:31:06.893 norandommap=0
00:31:06.893 numjobs=1
00:31:06.893
00:31:06.893 verify_dump=1
00:31:06.893 verify_backlog=512
00:31:06.893 verify_state_save=0
00:31:06.893 do_verify=1
00:31:06.893 verify=crc32c-intel
00:31:06.893 [job0]
00:31:06.893 filename=/dev/nvme0n1
00:31:06.893 [job1]
00:31:06.893 filename=/dev/nvme0n2
00:31:06.893 [job2]
00:31:06.893 filename=/dev/nvme0n3
00:31:06.893 [job3]
00:31:06.893 filename=/dev/nvme0n4
00:31:06.893 Could not set queue depth (nvme0n1)
00:31:06.893 Could not set queue depth (nvme0n2)
00:31:06.893 Could not set queue depth (nvme0n3)
00:31:06.893 Could not set queue depth (nvme0n4)
00:31:06.893 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:06.893 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:06.893 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:06.893 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:06.893 fio-3.35
00:31:06.893 Starting 4 threads
00:31:08.266
00:31:08.266 job0: (groupid=0, jobs=1): err= 0: pid=225234: Tue Dec 10 23:02:15 2024
00:31:08.266 read: IOPS=5577, BW=21.8MiB/s (22.8MB/s)(21.9MiB/1005msec)
00:31:08.266 slat (usec): min=2, max=10268, avg=87.42, stdev=715.87
00:31:08.266 clat (usec): min=2946, max=22013, avg=10869.47, stdev=2942.79
00:31:08.266 lat (usec): min=3294, max=28559, avg=10956.89, stdev=3014.76
00:31:08.266 clat percentiles (usec):
00:31:08.266 | 1.00th=[ 7046], 5.00th=[ 7570], 10.00th=[ 8291], 20.00th=[ 8717],
00:31:08.266 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10552],
00:31:08.266 | 70.00th=[10814], 80.00th=[12387], 90.00th=[15533], 95.00th=[17957],
00:31:08.266 | 99.00th=[20055], 99.50th=[20317], 99.90th=[21365], 99.95th=[21365],
00:31:08.266 | 99.99th=[21890]
00:31:08.266 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets
00:31:08.266 slat (usec): min=4, max=8826, avg=84.03, stdev=517.66
00:31:08.266 clat (usec): min=1880, max=42853, avg=11753.87, stdev=6862.69
00:31:08.266 lat (usec): min=1887, max=42860, avg=11837.90, stdev=6915.70
00:31:08.266 clat percentiles (usec):
00:31:08.266 | 1.00th=[ 3326], 5.00th=[ 5800], 10.00th=[ 7046], 20.00th=[ 8160],
00:31:08.266 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[10945],
00:31:08.266 | 70.00th=[11600], 80.00th=[13042], 90.00th=[15664], 95.00th=[31851],
00:31:08.266 | 99.00th=[39584], 99.50th=[40109], 99.90th=[42730], 99.95th=[42730],
00:31:08.266 | 99.99th=[42730]
00:31:08.266 bw ( KiB/s): min=20480, max=24576, per=38.35%, avg=22528.00, stdev=2896.31, samples=2
00:31:08.266 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2
00:31:08.266 lat (msec) : 2=0.04%, 4=1.03%, 10=44.75%, 20=50.00%, 50=4.19%
00:31:08.266 cpu : usr=4.28%, sys=7.37%, ctx=469, majf=0, minf=1
00:31:08.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:31:08.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:08.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:08.266 issued rwts: total=5605,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:08.266 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:08.266 job1: (groupid=0, jobs=1): err= 0: pid=225235: Tue Dec 10 23:02:15 2024
00:31:08.266 read: IOPS=3670, BW=14.3MiB/s (15.0MB/s)(14.5MiB/1011msec)
00:31:08.266 slat (usec): min=2, max=24111, avg=105.41, stdev=934.13
00:31:08.266 clat (usec): min=4047, max=44705, avg=14914.52, stdev=6342.83
00:31:08.266 lat (usec): min=4058, max=44717, avg=15019.93, stdev=6394.83
00:31:08.266 clat percentiles (usec):
00:31:08.266 | 1.00th=[ 6652], 5.00th=[ 7439], 10.00th=[ 8029], 20.00th=[10552],
00:31:08.266 | 30.00th=[11338], 40.00th=[12256], 50.00th=[13173], 60.00th=[14091],
00:31:08.266 | 70.00th=[15139], 80.00th=[20579], 90.00th=[25035], 95.00th=[26870],
00:31:08.266 | 99.00th=[33162], 99.50th=[37487], 99.90th=[40109], 99.95th=[43779],
00:31:08.266 | 99.99th=[44827]
00:31:08.266 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets
00:31:08.266 slat (usec): min=3, max=31735, avg=121.14, stdev=1199.16
00:31:08.266 clat (usec): min=222, max=95802, avg=17706.80, stdev=15655.39
00:31:08.266 lat (usec): min=240, max=103849, avg=17827.94, stdev=15771.28
00:31:08.266 clat percentiles (usec):
00:31:08.266 | 1.00th=[ 1565], 5.00th=[ 3687], 10.00th=[ 9241], 20.00th=[10290],
00:31:08.266 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12125], 60.00th=[12911],
00:31:08.266 | 70.00th=[15270], 80.00th=[21103], 90.00th=[35914], 95.00th=[53216],
00:31:08.266 | 99.00th=[80217], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945],
00:31:08.266 | 99.99th=[95945]
00:31:08.266 bw ( KiB/s): min=16376, max=16384, per=27.88%, avg=16380.00, stdev= 5.66, samples=2
00:31:08.266 iops : min= 4094, max= 4096, avg=4095.00, stdev= 1.41, samples=2
00:31:08.266 lat (usec) : 250=0.01%, 500=0.03%, 750=0.09%, 1000=0.18%
00:31:08.266 lat (msec) : 2=1.06%, 4=1.68%, 10=13.74%, 20=61.33%, 50=18.64%
00:31:08.266 lat (msec) : 100=3.24%
00:31:08.266 cpu : usr=2.77%, sys=4.16%, ctx=223, majf=0, minf=1
00:31:08.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:31:08.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:08.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:08.266 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:08.266 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:08.266 job2: (groupid=0, jobs=1): err= 0: pid=225236: Tue Dec 10 23:02:15 2024
00:31:08.266 read: IOPS=1725, BW=6903KiB/s (7069kB/s)(6972KiB/1010msec)
00:31:08.266 slat (usec): min=2, max=25870, avg=150.52, stdev=1166.87
00:31:08.266 clat (usec): min=1034, max=81605, avg=22175.52, stdev=11656.44
00:31:08.266 lat (usec): min=7188, max=92977, avg=22326.05, stdev=11737.84
00:31:08.266 clat percentiles (usec):
00:31:08.266 | 1.00th=[11469], 5.00th=[11863], 10.00th=[12518], 20.00th=[12649],
00:31:08.266 | 30.00th=[15270], 40.00th=[16581], 50.00th=[19530], 60.00th=[21103],
00:31:08.266 | 70.00th=[25035], 80.00th=[26608], 90.00th=[35914], 95.00th=[55837],
00:31:08.266 | 99.00th=[56361], 99.50th=[61604], 99.90th=[78119], 99.95th=[81265],
00:31:08.266 | 99.99th=[81265]
00:31:08.266 write: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec); 0 zone resets
00:31:08.266 slat (usec): min=3, max=43338, avg=282.74, stdev=2088.62
00:31:08.266 clat (msec): min=2, max=139, avg=42.82, stdev=33.73
00:31:08.266 lat (msec): min=2, max=139, avg=43.10, stdev=33.89
00:31:08.266 clat percentiles (msec):
00:31:08.266 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 14],
00:31:08.266 | 30.00th=[ 19], 40.00th=[ 25], 50.00th=[ 31], 60.00th=[ 41],
00:31:08.266 | 70.00th=[ 55], 80.00th=[ 69], 90.00th=[ 93], 95.00th=[ 124],
00:31:08.266 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140],
00:31:08.266 | 99.99th=[ 140]
00:31:08.266 bw ( KiB/s): min= 7366, max= 9032, per=13.96%, avg=8199.00, stdev=1178.04, samples=2
00:31:08.266 iops : min= 1841, max= 2258, avg=2049.50, stdev=294.86, samples=2
00:31:08.266 lat (msec) : 2=0.03%, 4=1.06%, 10=4.96%, 20=36.80%, 50=35.69%
00:31:08.266 lat (msec) : 100=16.86%, 250=4.62%
00:31:08.266 cpu : usr=1.39%, sys=2.58%, ctx=186, majf=0, minf=1
00:31:08.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3%
00:31:08.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:08.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:08.266 issued rwts: total=1743,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:08.266 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:08.266 job3: (groupid=0, jobs=1): err= 0: pid=225237: Tue Dec 10 23:02:15 2024
00:31:08.266 read: IOPS=2585, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1011msec)
00:31:08.266 slat (usec): min=3, max=24477, avg=161.25, stdev=1378.84
00:31:08.266 clat (usec): min=5043, max=49359, avg=19573.39, stdev=6490.05
00:31:08.266 lat (usec): min=5051, max=49374, avg=19734.64, stdev=6592.40
00:31:08.266 clat percentiles (usec):
00:31:08.266 | 1.00th=[ 7504], 5.00th=[11994], 10.00th=[13435], 20.00th=[13829],
00:31:08.266 | 30.00th=[14353], 40.00th=[16581], 50.00th=[17433], 60.00th=[19268],
00:31:08.266 | 70.00th=[23987], 80.00th=[25560], 90.00th=[27132], 95.00th=[28705],
00:31:08.266 | 99.00th=[42206], 99.50th=[45351], 99.90th=[46924], 99.95th=[47973],
00:31:08.266 | 99.99th=[49546]
00:31:08.266 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets
00:31:08.266 slat (usec): min=4, max=19173, avg=181.91, stdev=1169.33
00:31:08.266 clat (usec): min=1190, max=122308, avg=24968.20, stdev=19413.89
00:31:08.266 lat (usec): min=1202, max=122334, avg=25150.11, stdev=19547.52
00:31:08.266 clat percentiles (msec):
00:31:08.266 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 13],
00:31:08.266 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 20], 60.00th=[ 21],
00:31:08.266 | 70.00th=[ 25], 80.00th=[ 33], 90.00th=[ 51], 95.00th=[ 62],
00:31:08.266 | 99.00th=[ 112], 99.50th=[ 118], 99.90th=[ 123], 99.95th=[ 123],
00:31:08.266 | 99.99th=[ 123]
00:31:08.266 bw ( KiB/s): min=11696, max=12288, per=20.41%, avg=11992.00,
stdev=418.61, samples=2 00:31:08.266 iops : min= 2924, max= 3072, avg=2998.00, stdev=104.65, samples=2 00:31:08.266 lat (msec) : 2=0.04%, 10=4.20%, 20=54.52%, 50=35.84%, 100=4.43% 00:31:08.266 lat (msec) : 250=0.97% 00:31:08.266 cpu : usr=1.88%, sys=4.16%, ctx=200, majf=0, minf=1 00:31:08.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:31:08.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:08.267 issued rwts: total=2614,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:08.267 00:31:08.267 Run status group 0 (all jobs): 00:31:08.267 READ: bw=52.8MiB/s (55.4MB/s), 6903KiB/s-21.8MiB/s (7069kB/s-22.8MB/s), io=53.4MiB (56.0MB), run=1005-1011msec 00:31:08.267 WRITE: bw=57.4MiB/s (60.2MB/s), 8111KiB/s-21.9MiB/s (8306kB/s-23.0MB/s), io=58.0MiB (60.8MB), run=1005-1011msec 00:31:08.267 00:31:08.267 Disk stats (read/write): 00:31:08.267 nvme0n1: ios=5145/5158, merge=0/0, ticks=54010/50143, in_queue=104153, util=97.90% 00:31:08.267 nvme0n2: ios=3122/3211, merge=0/0, ticks=35481/43531, in_queue=79012, util=98.07% 00:31:08.267 nvme0n3: ios=1082/1439, merge=0/0, ticks=22490/49404, in_queue=71894, util=98.12% 00:31:08.267 nvme0n4: ios=2618/2711, merge=0/0, ticks=49415/54630, in_queue=104045, util=98.01% 00:31:08.267 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:08.267 [global] 00:31:08.267 thread=1 00:31:08.267 invalidate=1 00:31:08.267 rw=randwrite 00:31:08.267 time_based=1 00:31:08.267 runtime=1 00:31:08.267 ioengine=libaio 00:31:08.267 direct=1 00:31:08.267 bs=4096 00:31:08.267 iodepth=128 00:31:08.267 norandommap=0 00:31:08.267 numjobs=1 00:31:08.267 00:31:08.267 verify_dump=1 00:31:08.267 
verify_backlog=512 00:31:08.267 verify_state_save=0 00:31:08.267 do_verify=1 00:31:08.267 verify=crc32c-intel 00:31:08.267 [job0] 00:31:08.267 filename=/dev/nvme0n1 00:31:08.267 [job1] 00:31:08.267 filename=/dev/nvme0n2 00:31:08.267 [job2] 00:31:08.267 filename=/dev/nvme0n3 00:31:08.267 [job3] 00:31:08.267 filename=/dev/nvme0n4 00:31:08.267 Could not set queue depth (nvme0n1) 00:31:08.267 Could not set queue depth (nvme0n2) 00:31:08.267 Could not set queue depth (nvme0n3) 00:31:08.267 Could not set queue depth (nvme0n4) 00:31:08.525 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:08.525 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:08.525 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:08.525 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:08.525 fio-3.35 00:31:08.526 Starting 4 threads 00:31:09.900 00:31:09.900 job0: (groupid=0, jobs=1): err= 0: pid=225581: Tue Dec 10 23:02:17 2024 00:31:09.900 read: IOPS=5767, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1003msec) 00:31:09.900 slat (usec): min=2, max=12924, avg=83.09, stdev=559.60 00:31:09.900 clat (usec): min=2591, max=24956, avg=10789.63, stdev=3088.63 00:31:09.900 lat (usec): min=2593, max=24960, avg=10872.72, stdev=3107.20 00:31:09.900 clat percentiles (usec): 00:31:09.900 | 1.00th=[ 3589], 5.00th=[ 6718], 10.00th=[ 8094], 20.00th=[ 8848], 00:31:09.900 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10552], 60.00th=[10945], 00:31:09.900 | 70.00th=[11731], 80.00th=[12256], 90.00th=[13566], 95.00th=[16581], 00:31:09.900 | 99.00th=[22152], 99.50th=[22152], 99.90th=[24773], 99.95th=[24773], 00:31:09.900 | 99.99th=[25035] 00:31:09.900 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:31:09.900 slat (usec): min=3, max=8153, avg=78.20, 
stdev=448.04 00:31:09.900 clat (usec): min=3586, max=22714, avg=10508.02, stdev=1913.33 00:31:09.900 lat (usec): min=3590, max=22834, avg=10586.22, stdev=1927.66 00:31:09.900 clat percentiles (usec): 00:31:09.900 | 1.00th=[ 5669], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[ 9503], 00:31:09.900 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:31:09.901 | 70.00th=[11076], 80.00th=[11863], 90.00th=[12649], 95.00th=[13304], 00:31:09.901 | 99.00th=[16319], 99.50th=[22676], 99.90th=[22676], 99.95th=[22676], 00:31:09.901 | 99.99th=[22676] 00:31:09.901 bw ( KiB/s): min=24576, max=24576, per=40.51%, avg=24576.00, stdev= 0.00, samples=2 00:31:09.901 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:31:09.901 lat (msec) : 4=0.57%, 10=39.40%, 20=58.24%, 50=1.79% 00:31:09.901 cpu : usr=4.19%, sys=7.68%, ctx=545, majf=0, minf=1 00:31:09.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:09.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:09.901 issued rwts: total=5785,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:09.901 job1: (groupid=0, jobs=1): err= 0: pid=225582: Tue Dec 10 23:02:17 2024 00:31:09.901 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.7MiB/1048msec) 00:31:09.901 slat (usec): min=2, max=16846, avg=115.50, stdev=809.06 00:31:09.901 clat (usec): min=5038, max=77472, avg=16555.79, stdev=11013.33 00:31:09.901 lat (usec): min=5054, max=94319, avg=16671.29, stdev=11075.38 00:31:09.901 clat percentiles (usec): 00:31:09.901 | 1.00th=[ 6718], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10683], 00:31:09.901 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12649], 60.00th=[13566], 00:31:09.901 | 70.00th=[14615], 80.00th=[20055], 90.00th=[29230], 95.00th=[32900], 00:31:09.901 | 99.00th=[67634], 99.50th=[77071], 
99.90th=[77071], 99.95th=[77071], 00:31:09.901 | 99.99th=[77071] 00:31:09.901 write: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1048msec); 0 zone resets 00:31:09.901 slat (usec): min=2, max=10921, avg=102.92, stdev=621.11 00:31:09.901 clat (usec): min=3502, max=66931, avg=13389.55, stdev=5162.73 00:31:09.901 lat (usec): min=3598, max=69181, avg=13492.47, stdev=5206.26 00:31:09.901 clat percentiles (usec): 00:31:09.901 | 1.00th=[ 5407], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10290], 00:31:09.901 | 30.00th=[10945], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:31:09.901 | 70.00th=[13698], 80.00th=[14746], 90.00th=[17433], 95.00th=[22414], 00:31:09.901 | 99.00th=[35914], 99.50th=[38011], 99.90th=[66847], 99.95th=[66847], 00:31:09.901 | 99.99th=[66847] 00:31:09.901 bw ( KiB/s): min=15920, max=20944, per=30.38%, avg=18432.00, stdev=3552.50, samples=2 00:31:09.901 iops : min= 3980, max= 5236, avg=4608.00, stdev=888.13, samples=2 00:31:09.901 lat (msec) : 4=0.01%, 10=13.08%, 20=73.29%, 50=12.10%, 100=1.52% 00:31:09.901 cpu : usr=3.44%, sys=5.54%, ctx=368, majf=0, minf=1 00:31:09.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:09.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:09.901 issued rwts: total=4275,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:09.901 job2: (groupid=0, jobs=1): err= 0: pid=225590: Tue Dec 10 23:02:17 2024 00:31:09.901 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:31:09.901 slat (usec): min=2, max=51153, avg=203.98, stdev=1510.76 00:31:09.901 clat (usec): min=7109, max=87201, avg=26164.74, stdev=15693.71 00:31:09.901 lat (usec): min=7114, max=87206, avg=26368.72, stdev=15775.30 00:31:09.901 clat percentiles (usec): 00:31:09.901 | 1.00th=[10421], 5.00th=[11338], 10.00th=[11863], 20.00th=[15270], 
00:31:09.901 | 30.00th=[17433], 40.00th=[19268], 50.00th=[22152], 60.00th=[25822], 00:31:09.901 | 70.00th=[29754], 80.00th=[31851], 90.00th=[39060], 95.00th=[50070], 00:31:09.901 | 99.00th=[87557], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:31:09.901 | 99.99th=[87557] 00:31:09.901 write: IOPS=2789, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1007msec); 0 zone resets 00:31:09.901 slat (usec): min=3, max=13620, avg=164.14, stdev=1022.63 00:31:09.901 clat (usec): min=561, max=70336, avg=21469.42, stdev=7868.18 00:31:09.901 lat (usec): min=6853, max=70342, avg=21633.56, stdev=7909.32 00:31:09.901 clat percentiles (usec): 00:31:09.901 | 1.00th=[10945], 5.00th=[11863], 10.00th=[12125], 20.00th=[13173], 00:31:09.901 | 30.00th=[17171], 40.00th=[19268], 50.00th=[20579], 60.00th=[23725], 00:31:09.901 | 70.00th=[26870], 80.00th=[27657], 90.00th=[29754], 95.00th=[30802], 00:31:09.901 | 99.00th=[47973], 99.50th=[58459], 99.90th=[70779], 99.95th=[70779], 00:31:09.901 | 99.99th=[70779] 00:31:09.901 bw ( KiB/s): min=10664, max=10784, per=17.68%, avg=10724.00, stdev=84.85, samples=2 00:31:09.901 iops : min= 2666, max= 2696, avg=2681.00, stdev=21.21, samples=2 00:31:09.901 lat (usec) : 750=0.02% 00:31:09.901 lat (msec) : 10=0.86%, 20=44.65%, 50=51.57%, 100=2.91% 00:31:09.901 cpu : usr=2.78%, sys=4.57%, ctx=206, majf=0, minf=1 00:31:09.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:09.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:09.901 issued rwts: total=2560,2809,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:09.901 job3: (groupid=0, jobs=1): err= 0: pid=225592: Tue Dec 10 23:02:17 2024 00:31:09.901 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:31:09.901 slat (usec): min=3, max=13198, avg=212.86, stdev=1120.39 00:31:09.901 clat (usec): min=14836, 
max=52860, avg=25968.77, stdev=5387.83 00:31:09.901 lat (usec): min=15834, max=52868, avg=26181.63, stdev=5457.11 00:31:09.901 clat percentiles (usec): 00:31:09.901 | 1.00th=[16319], 5.00th=[17957], 10.00th=[19530], 20.00th=[21365], 00:31:09.901 | 30.00th=[23200], 40.00th=[24249], 50.00th=[25297], 60.00th=[26870], 00:31:09.901 | 70.00th=[28443], 80.00th=[30016], 90.00th=[31589], 95.00th=[34341], 00:31:09.901 | 99.00th=[44827], 99.50th=[47449], 99.90th=[49021], 99.95th=[49021], 00:31:09.901 | 99.99th=[52691] 00:31:09.901 write: IOPS=2316, BW=9267KiB/s (9490kB/s)(9332KiB/1007msec); 0 zone resets 00:31:09.901 slat (usec): min=3, max=13560, avg=234.13, stdev=1146.83 00:31:09.901 clat (usec): min=5072, max=62787, avg=31593.95, stdev=11093.08 00:31:09.901 lat (usec): min=13801, max=62802, avg=31828.08, stdev=11164.86 00:31:09.901 clat percentiles (usec): 00:31:09.901 | 1.00th=[15795], 5.00th=[17957], 10.00th=[18482], 20.00th=[21103], 00:31:09.901 | 30.00th=[26346], 40.00th=[27132], 50.00th=[28967], 60.00th=[30540], 00:31:09.901 | 70.00th=[35914], 80.00th=[40109], 90.00th=[50594], 95.00th=[53216], 00:31:09.901 | 99.00th=[56886], 99.50th=[59507], 99.90th=[62653], 99.95th=[62653], 00:31:09.901 | 99.99th=[62653] 00:31:09.901 bw ( KiB/s): min= 8336, max= 9312, per=14.55%, avg=8824.00, stdev=690.14, samples=2 00:31:09.901 iops : min= 2084, max= 2328, avg=2206.00, stdev=172.53, samples=2 00:31:09.901 lat (msec) : 10=0.02%, 20=14.65%, 50=79.18%, 100=6.14% 00:31:09.901 cpu : usr=2.29%, sys=3.78%, ctx=244, majf=0, minf=1 00:31:09.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:31:09.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:09.901 issued rwts: total=2048,2333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:09.901 00:31:09.901 Run status group 0 (all jobs): 
00:31:09.901 READ: bw=54.7MiB/s (57.3MB/s), 8135KiB/s-22.5MiB/s (8330kB/s-23.6MB/s), io=57.3MiB (60.1MB), run=1003-1048msec 00:31:09.901 WRITE: bw=59.2MiB/s (62.1MB/s), 9267KiB/s-23.9MiB/s (9490kB/s-25.1MB/s), io=62.1MiB (65.1MB), run=1003-1048msec 00:31:09.901 00:31:09.901 Disk stats (read/write): 00:31:09.901 nvme0n1: ios=4756/5120, merge=0/0, ticks=26840/24776, in_queue=51616, util=93.89% 00:31:09.901 nvme0n2: ios=3927/4120, merge=0/0, ticks=22784/21364, in_queue=44148, util=94.62% 00:31:09.901 nvme0n3: ios=2391/2560, merge=0/0, ticks=20386/22167, in_queue=42553, util=96.04% 00:31:09.901 nvme0n4: ios=1701/2048, merge=0/0, ticks=16323/22706, in_queue=39029, util=96.21% 00:31:09.901 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:09.901 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=225729 00:31:09.901 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:09.901 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:09.901 [global] 00:31:09.901 thread=1 00:31:09.901 invalidate=1 00:31:09.901 rw=read 00:31:09.901 time_based=1 00:31:09.901 runtime=10 00:31:09.901 ioengine=libaio 00:31:09.901 direct=1 00:31:09.901 bs=4096 00:31:09.901 iodepth=1 00:31:09.901 norandommap=1 00:31:09.901 numjobs=1 00:31:09.901 00:31:09.901 [job0] 00:31:09.901 filename=/dev/nvme0n1 00:31:09.901 [job1] 00:31:09.901 filename=/dev/nvme0n2 00:31:09.901 [job2] 00:31:09.901 filename=/dev/nvme0n3 00:31:09.901 [job3] 00:31:09.901 filename=/dev/nvme0n4 00:31:09.901 Could not set queue depth (nvme0n1) 00:31:09.901 Could not set queue depth (nvme0n2) 00:31:09.901 Could not set queue depth (nvme0n3) 00:31:09.901 Could not set queue depth (nvme0n4) 00:31:09.901 job0: (g=0): rw=read, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:09.901 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:09.901 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:09.901 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:09.901 fio-3.35 00:31:09.901 Starting 4 threads 00:31:13.187 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:13.187 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:13.187 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=18952192, buflen=4096 00:31:13.187 fio: pid=225821, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:13.445 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:13.445 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:13.445 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=811008, buflen=4096 00:31:13.445 fio: pid=225820, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:13.703 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:13.703 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:31:13.703 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=372736, buflen=4096 00:31:13.703 fio: pid=225818, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:13.961 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=24195072, buflen=4096 00:31:13.961 fio: pid=225819, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:13.961 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:13.961 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:13.961 00:31:13.961 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=225818: Tue Dec 10 23:02:21 2024 00:31:13.961 read: IOPS=26, BW=103KiB/s (106kB/s)(364KiB/3524msec) 00:31:13.961 slat (usec): min=11, max=8920, avg=114.77, stdev=928.14 00:31:13.961 clat (usec): min=253, max=45044, avg=38349.76, stdev=10149.00 00:31:13.961 lat (usec): min=265, max=50013, avg=38465.50, stdev=10218.49 00:31:13.961 clat percentiles (usec): 00:31:13.961 | 1.00th=[ 253], 5.00th=[ 429], 10.00th=[40633], 20.00th=[41157], 00:31:13.961 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:13.961 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:13.961 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:31:13.961 | 99.99th=[44827] 00:31:13.961 bw ( KiB/s): min= 96, max= 136, per=0.92%, avg=105.33, stdev=15.53, samples=6 00:31:13.961 iops : min= 24, max= 34, avg=26.33, stdev= 3.88, samples=6 00:31:13.961 lat (usec) : 500=5.43%, 1000=1.09% 00:31:13.961 lat (msec) : 50=92.39% 00:31:13.961 cpu : usr=0.00%, sys=0.09%, ctx=94, majf=0, minf=1 00:31:13.961 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.961 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.961 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:13.961 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=225819: Tue Dec 10 23:02:21 2024 00:31:13.961 read: IOPS=1554, BW=6218KiB/s (6367kB/s)(23.1MiB/3800msec) 00:31:13.961 slat (usec): min=4, max=7894, avg=13.90, stdev=156.47 00:31:13.961 clat (usec): min=167, max=42084, avg=623.15, stdev=3878.71 00:31:13.961 lat (usec): min=178, max=47998, avg=635.72, stdev=3903.59 00:31:13.961 clat percentiles (usec): 00:31:13.961 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223], 00:31:13.961 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:31:13.961 | 70.00th=[ 247], 80.00th=[ 265], 90.00th=[ 302], 95.00th=[ 396], 00:31:13.961 | 99.00th=[ 652], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:13.961 | 99.99th=[42206] 00:31:13.961 bw ( KiB/s): min= 104, max=16264, per=59.16%, avg=6740.86, stdev=5692.76, samples=7 00:31:13.961 iops : min= 26, max= 4066, avg=1685.14, stdev=1423.29, samples=7 00:31:13.961 lat (usec) : 250=72.70%, 500=25.22%, 750=1.13% 00:31:13.961 lat (msec) : 2=0.02%, 50=0.91% 00:31:13.961 cpu : usr=0.97%, sys=1.55%, ctx=5913, majf=0, minf=2 00:31:13.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.961 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.961 issued rwts: total=5908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:13.961 job2: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=225820: Tue Dec 10 23:02:21 2024 00:31:13.961 read: IOPS=60, BW=242KiB/s (248kB/s)(792KiB/3268msec) 00:31:13.961 slat (nsec): min=4244, max=36389, avg=14398.18, stdev=7563.60 00:31:13.961 clat (usec): min=213, max=42312, avg=16368.67, stdev=20157.34 00:31:13.961 lat (usec): min=217, max=42320, avg=16383.08, stdev=20157.88 00:31:13.961 clat percentiles (usec): 00:31:13.961 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 249], 20.00th=[ 277], 00:31:13.961 | 30.00th=[ 302], 40.00th=[ 371], 50.00th=[ 396], 60.00th=[ 537], 00:31:13.961 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:13.962 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:13.962 | 99.99th=[42206] 00:31:13.962 bw ( KiB/s): min= 96, max= 600, per=2.25%, avg=256.00, stdev=186.38, samples=6 00:31:13.962 iops : min= 24, max= 150, avg=64.00, stdev=46.60, samples=6 00:31:13.962 lat (usec) : 250=10.55%, 500=47.24%, 750=3.02% 00:31:13.962 lat (msec) : 50=38.69% 00:31:13.962 cpu : usr=0.00%, sys=0.18%, ctx=199, majf=0, minf=2 00:31:13.962 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.962 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.962 issued rwts: total=199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:13.962 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=225821: Tue Dec 10 23:02:21 2024 00:31:13.962 read: IOPS=1567, BW=6270KiB/s (6420kB/s)(18.1MiB/2952msec) 00:31:13.962 slat (nsec): min=4987, max=60952, avg=11885.35, stdev=7643.14 00:31:13.962 clat (usec): min=173, max=41309, avg=618.23, stdev=3863.74 00:31:13.962 lat (usec): min=184, max=41319, avg=630.12, stdev=3863.80 00:31:13.962 clat percentiles (usec): 
00:31:13.962 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 217], 00:31:13.962 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 243], 00:31:13.962 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 302], 95.00th=[ 355], 00:31:13.962 | 99.00th=[ 562], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:13.962 | 99.99th=[41157] 00:31:13.962 bw ( KiB/s): min= 184, max=10080, per=53.56%, avg=6102.40, stdev=4105.39, samples=5 00:31:13.962 iops : min= 46, max= 2520, avg=1525.60, stdev=1026.35, samples=5 00:31:13.962 lat (usec) : 250=65.73%, 500=32.84%, 750=0.45% 00:31:13.962 lat (msec) : 2=0.04%, 50=0.91% 00:31:13.962 cpu : usr=0.95%, sys=2.58%, ctx=4628, majf=0, minf=1 00:31:13.962 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.962 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.962 issued rwts: total=4628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:13.962 00:31:13.962 Run status group 0 (all jobs): 00:31:13.962 READ: bw=11.1MiB/s (11.7MB/s), 103KiB/s-6270KiB/s (106kB/s-6420kB/s), io=42.3MiB (44.3MB), run=2952-3800msec 00:31:13.962 00:31:13.962 Disk stats (read/write): 00:31:13.962 nvme0n1: ios=87/0, merge=0/0, ticks=3327/0, in_queue=3327, util=95.22% 00:31:13.962 nvme0n2: ios=5946/0, merge=0/0, ticks=4179/0, in_queue=4179, util=99.09% 00:31:13.962 nvme0n3: ios=194/0, merge=0/0, ticks=3075/0, in_queue=3075, util=96.77% 00:31:13.962 nvme0n4: ios=4624/0, merge=0/0, ticks=2707/0, in_queue=2707, util=96.73% 00:31:14.219 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:14.219 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:14.477 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:14.477 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:14.734 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:14.734 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:14.993 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:14.993 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:15.251 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:15.251 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 225729 00:31:15.251 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:15.251 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:15.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:15.508 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:15.508 23:02:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:15.508 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:15.508 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:15.508 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:15.508 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:15.508 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:15.508 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:15.508 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:15.508 nvmf hotplug test: fio failed as expected 00:31:15.508 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@91 -- # nvmftestfini 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:15.766 rmmod nvme_tcp 00:31:15.766 rmmod nvme_fabrics 00:31:15.766 rmmod nvme_keyring 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 223707 ']' 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 223707 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 223707 ']' 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 223707 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:15.766 
23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 223707 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 223707' 00:31:15.766 killing process with pid 223707 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 223707 00:31:15.766 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 223707 00:31:16.025 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:16.025 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:16.025 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:16.025 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:16.025 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:16.025 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:16.025 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:16.025 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:16.026 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:31:16.026 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.026 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.026 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:18.567 00:31:18.567 real 0m23.711s 00:31:18.567 user 1m7.676s 00:31:18.567 sys 0m9.661s 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:18.567 ************************************ 00:31:18.567 END TEST nvmf_fio_target 00:31:18.567 ************************************ 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:18.567 ************************************ 00:31:18.567 START TEST nvmf_bdevio 00:31:18.567 ************************************ 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 
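The teardown trace above guards its `kill`: it first probes the recorded PID, then reads the process's `comm` name (the log's `ps --no-headers -o comm= 223707`) so it never SIGKILLs an unrelated process that reused the PID, and finally `wait`s to reap it. A minimal sketch of that pattern — the function name and the `sleep` demo target are invented for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard seen in the autotest_common.sh trace:
# probe the PID, verify its executable name, refuse suspicious targets,
# then SIGKILL and reap.
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0      # kill -0: probe without signalling
    name=$(ps -o comm= -p "$pid")               # comm= prints only the exe name
    [ "$name" = sudo ] && return 1              # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill -9 "$pid"
    wait "$pid" 2>/dev/null || true             # reap if it was our child
}

sleep 60 &
killprocess_sketch $!
```

The `wait` matters: without it a killed child lingers as a zombie until the shell exits.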
00:31:18.567 * Looking for test storage... 00:31:18.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:18.567 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:18.567 23:02:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- 
# return 0 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.568 --rc genhtml_branch_coverage=1 00:31:18.568 --rc genhtml_function_coverage=1 00:31:18.568 --rc genhtml_legend=1 00:31:18.568 --rc geninfo_all_blocks=1 00:31:18.568 --rc geninfo_unexecuted_blocks=1 00:31:18.568 00:31:18.568 ' 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.568 --rc genhtml_branch_coverage=1 00:31:18.568 --rc genhtml_function_coverage=1 00:31:18.568 --rc genhtml_legend=1 00:31:18.568 --rc geninfo_all_blocks=1 00:31:18.568 --rc geninfo_unexecuted_blocks=1 00:31:18.568 00:31:18.568 ' 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.568 --rc genhtml_branch_coverage=1 00:31:18.568 --rc genhtml_function_coverage=1 00:31:18.568 --rc genhtml_legend=1 00:31:18.568 --rc geninfo_all_blocks=1 00:31:18.568 --rc geninfo_unexecuted_blocks=1 00:31:18.568 00:31:18.568 ' 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.568 --rc genhtml_branch_coverage=1 00:31:18.568 --rc genhtml_function_coverage=1 00:31:18.568 --rc genhtml_legend=1 00:31:18.568 --rc geninfo_all_blocks=1 00:31:18.568 --rc geninfo_unexecuted_blocks=1 00:31:18.568 00:31:18.568 ' 00:31:18.568 23:02:25 
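The `cmp_versions` trace above splits each version string on `.`, `-` and `:` into an array and compares the fields numerically, which is how `lt 1.15 2` evaluates true for the installed lcov (a plain string compare would get `1.9` vs `1.15` wrong). A small re-creation of that logic, assuming purely numeric fields — the real `scripts/common.sh` handles more cases:

```shell
#!/usr/bin/env bash
# Field-wise numeric version compare, modeled on the cmp_versions trace:
# split on . - : and compare position by position; missing fields count as 0.
ver_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # assumes numeric fields
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2  && echo "1.15 < 2"
ver_lt 1.9 1.15 && echo "1.9 < 1.15"
```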
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:18.568 23:02:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.568 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.569 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.569 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:18.569 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:18.569 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:18.569 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:20.472 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:31:20.472 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.472 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.472 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.472 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.472 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.472 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.472 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.472 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.472 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.473 23:02:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:20.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:20.473 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.473 23:02:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:20.473 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:20.473 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.473 23:02:28 
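The device scan above walks the PCI bus, matches each device's vendor:device pair against a known-ID list (the two E810 ports match Intel `0x8086:0x159b`), then reports the netdevs found under `/sys/bus/pci/devices/<bdf>/net/`. A hedged sketch of the same sysfs walk, restricted to two Intel E810 IDs from the log — `scan_pci` is a made-up name, not the script's function:

```shell
#!/usr/bin/env bash
# Sketch of the NIC discovery in the trace: read the vendor/device sysfs
# attributes of every PCI device, keep those on a supported-ID list, and
# print the network interface(s) the kernel bound to each one.
scan_pci() {
    local base=${1:-/sys/bus/pci/devices} pci ven dev net id
    local -a supported=("0x8086:0x159b" "0x8086:0x1592")
    for pci in "$base"/*; do
        [ -r "$pci/vendor" ] || continue
        ven=$(<"$pci/vendor") dev=$(<"$pci/device")
        for id in "${supported[@]}"; do
            [ "$ven:$dev" = "$id" ] || continue
            echo "Found ${pci##*/} ($ven - $dev)"
            # the net/ subdir exists only while a netdev driver is bound
            for net in "$pci"/net/*; do
                [ -e "$net" ] || continue
                echo "  net device: ${net##*/}"
            done
        done
    done
}

scan_pci   # prints nothing on hosts without a matching NIC
```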
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:31:20.473 00:31:20.473 --- 10.0.0.2 ping statistics --- 00:31:20.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.473 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:31:20.473 00:31:20.473 --- 10.0.0.1 ping statistics --- 00:31:20.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.473 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.473 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.474 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.474 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.474 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.474 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.474 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
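The namespace setup above moves one physical port (`cvl_0_0`) into `cvl_0_0_ns_spdk` to act as the target at 10.0.0.2 while the host side keeps `cvl_0_1` at 10.0.0.1, then pings in both directions to verify the path before starting the target app inside the namespace. A root-only sketch of the same split using a veth pair in place of the physical ports — every name here (`demo_tgt_ns`, `v_ini`, `v_tgt`) is invented for the demo:

```shell
#!/usr/bin/env bash
# Target/initiator split via a network namespace, as in the trace, but with
# a veth pair standing in for the two E810 ports. Needs root + iproute2;
# otherwise the demo is skipped. A production script would also trap EXIT
# to guarantee the namespace is cleaned up on failure.
NS=demo_tgt_ns
if [ "$(id -u)" -eq 0 ] && ip netns add "$NS" 2>/dev/null; then
    ip link add v_ini type veth peer name v_tgt
    ip link set v_tgt netns "$NS"                      # target side lives in the ns
    ip addr add 10.0.0.1/24 dev v_ini                  # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev v_tgt
    ip link set v_ini up
    ip netns exec "$NS" ip link set v_tgt up
    ip netns exec "$NS" ip link set lo up
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator
    ip link del v_ini 2>/dev/null                      # deletes the peer too
    ip netns del "$NS"
else
    echo "needs root; skipping namespace demo"
fi
```

Running the target under `ip netns exec` (the log's `NVMF_TARGET_NS_CMD`) then confines its listeners to the namespaced interface.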
nvmf/common.sh@509 -- # nvmfpid=228445 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 228445 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 228445 ']' 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.732 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:20.732 [2024-12-10 23:02:28.264931] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:20.732 [2024-12-10 23:02:28.266150] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:31:20.732 [2024-12-10 23:02:28.266212] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.732 [2024-12-10 23:02:28.340307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:20.732 [2024-12-10 23:02:28.398222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.732 [2024-12-10 23:02:28.398278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.732 [2024-12-10 23:02:28.398306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.732 [2024-12-10 23:02:28.398316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.732 [2024-12-10 23:02:28.398325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.732 [2024-12-10 23:02:28.400052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:31:20.732 [2024-12-10 23:02:28.400115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:31:20.732 [2024-12-10 23:02:28.400174] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:31:20.732 [2024-12-10 23:02:28.400177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.991 [2024-12-10 23:02:28.490421] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:20.991 [2024-12-10 23:02:28.490647] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:20.991 [2024-12-10 23:02:28.490953] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:20.991 [2024-12-10 23:02:28.491622] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:20.991 [2024-12-10 23:02:28.491855] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:20.991 [2024-12-10 23:02:28.540896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:20.991 Malloc0 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:20.991 [2024-12-10 23:02:28.605145] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:20.991 { 00:31:20.991 "params": { 00:31:20.991 "name": "Nvme$subsystem", 00:31:20.991 "trtype": "$TEST_TRANSPORT", 00:31:20.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.991 "adrfam": "ipv4", 00:31:20.991 "trsvcid": "$NVMF_PORT", 00:31:20.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.991 "hdgst": ${hdgst:-false}, 00:31:20.991 "ddgst": ${ddgst:-false} 00:31:20.991 }, 00:31:20.991 "method": "bdev_nvme_attach_controller" 00:31:20.991 } 00:31:20.991 EOF 00:31:20.991 )") 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:20.991 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:20.991 "params": { 00:31:20.991 "name": "Nvme1", 00:31:20.991 "trtype": "tcp", 00:31:20.991 "traddr": "10.0.0.2", 00:31:20.991 "adrfam": "ipv4", 00:31:20.991 "trsvcid": "4420", 00:31:20.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:20.991 "hdgst": false, 00:31:20.991 "ddgst": false 00:31:20.991 }, 00:31:20.991 "method": "bdev_nvme_attach_controller" 00:31:20.991 }' 00:31:20.991 [2024-12-10 23:02:28.658396] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:31:20.991 [2024-12-10 23:02:28.658470] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228590 ] 00:31:21.249 [2024-12-10 23:02:28.729983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:21.249 [2024-12-10 23:02:28.792965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.249 [2024-12-10 23:02:28.793017] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.249 [2024-12-10 23:02:28.793021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.507 I/O targets: 00:31:21.507 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:21.507 00:31:21.507 00:31:21.507 CUnit - A unit testing framework for C - Version 2.1-3 00:31:21.507 http://cunit.sourceforge.net/ 00:31:21.507 00:31:21.507 00:31:21.507 Suite: bdevio tests on: Nvme1n1 00:31:21.507 Test: blockdev write read block ...passed 00:31:21.507 Test: blockdev write zeroes read block ...passed 00:31:21.507 Test: blockdev write zeroes read no split ...passed 00:31:21.507 Test: blockdev 
write zeroes read split ...passed 00:31:21.507 Test: blockdev write zeroes read split partial ...passed 00:31:21.507 Test: blockdev reset ...[2024-12-10 23:02:29.190035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:21.507 [2024-12-10 23:02:29.190137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79e920 (9): Bad file descriptor 00:31:21.507 [2024-12-10 23:02:29.234889] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:21.507 passed 00:31:21.507 Test: blockdev write read 8 blocks ...passed 00:31:21.507 Test: blockdev write read size > 128k ...passed 00:31:21.507 Test: blockdev write read invalid size ...passed 00:31:21.764 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:21.764 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:21.764 Test: blockdev write read max offset ...passed 00:31:21.764 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:21.764 Test: blockdev writev readv 8 blocks ...passed 00:31:21.764 Test: blockdev writev readv 30 x 1block ...passed 00:31:21.764 Test: blockdev writev readv block ...passed 00:31:21.764 Test: blockdev writev readv size > 128k ...passed 00:31:21.764 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:21.764 Test: blockdev comparev and writev ...[2024-12-10 23:02:29.407802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:21.764 [2024-12-10 23:02:29.407839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.764 [2024-12-10 23:02:29.407864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:21.764 
[2024-12-10 23:02:29.407881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.764 [2024-12-10 23:02:29.408311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:21.764 [2024-12-10 23:02:29.408335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:21.764 [2024-12-10 23:02:29.408356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:21.764 [2024-12-10 23:02:29.408372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:21.764 [2024-12-10 23:02:29.408811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:21.764 [2024-12-10 23:02:29.408835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:21.765 [2024-12-10 23:02:29.408856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:21.765 [2024-12-10 23:02:29.408872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:21.765 [2024-12-10 23:02:29.409298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:21.765 [2024-12-10 23:02:29.409321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:21.765 [2024-12-10 23:02:29.409343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:21.765 [2024-12-10 23:02:29.409358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:21.765 passed 00:31:21.765 Test: blockdev nvme passthru rw ...passed 00:31:21.765 Test: blockdev nvme passthru vendor specific ...[2024-12-10 23:02:29.492821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:21.765 [2024-12-10 23:02:29.492852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:21.765 [2024-12-10 23:02:29.493031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:21.765 [2024-12-10 23:02:29.493055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:21.765 [2024-12-10 23:02:29.493229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:21.765 [2024-12-10 23:02:29.493252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:21.765 [2024-12-10 23:02:29.493421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:21.765 [2024-12-10 23:02:29.493443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:21.765 passed 00:31:22.023 Test: blockdev nvme admin passthru ...passed 00:31:22.023 Test: blockdev copy ...passed 00:31:22.023 00:31:22.023 Run Summary: Type Total Ran Passed Failed Inactive 00:31:22.023 suites 1 1 n/a 0 0 00:31:22.023 tests 23 23 23 0 0 00:31:22.023 asserts 152 152 152 0 n/a 00:31:22.023 00:31:22.023 Elapsed time = 0.934 
seconds 00:31:22.023 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:22.023 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.023 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:22.023 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.023 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:22.023 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:22.023 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:22.023 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:22.023 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:22.281 rmmod nvme_tcp 00:31:22.281 rmmod nvme_fabrics 00:31:22.281 rmmod nvme_keyring 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 228445 ']' 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 228445 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 228445 ']' 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 228445 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228445 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228445' 00:31:22.281 killing process with pid 228445 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 228445 00:31:22.281 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 228445 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.540 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.448 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:24.448 00:31:24.448 real 0m6.448s 00:31:24.448 user 0m8.362s 00:31:24.448 sys 0m2.535s 00:31:24.448 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:24.448 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:24.448 ************************************ 00:31:24.448 END TEST nvmf_bdevio 00:31:24.448 ************************************ 00:31:24.707 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:24.707 00:31:24.707 real 3m54.518s 00:31:24.707 user 8m51.994s 00:31:24.707 sys 1m23.911s 00:31:24.707 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:31:24.707 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:24.707 ************************************ 00:31:24.707 END TEST nvmf_target_core_interrupt_mode 00:31:24.707 ************************************ 00:31:24.707 23:02:32 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:24.707 23:02:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:24.707 23:02:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:24.707 23:02:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:24.707 ************************************ 00:31:24.707 START TEST nvmf_interrupt 00:31:24.707 ************************************ 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:24.707 * Looking for test storage... 
00:31:24.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:24.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.707 --rc genhtml_branch_coverage=1 00:31:24.707 --rc genhtml_function_coverage=1 00:31:24.707 --rc genhtml_legend=1 00:31:24.707 --rc geninfo_all_blocks=1 00:31:24.707 --rc geninfo_unexecuted_blocks=1 00:31:24.707 00:31:24.707 ' 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:24.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.707 --rc genhtml_branch_coverage=1 00:31:24.707 --rc 
genhtml_function_coverage=1 00:31:24.707 --rc genhtml_legend=1 00:31:24.707 --rc geninfo_all_blocks=1 00:31:24.707 --rc geninfo_unexecuted_blocks=1 00:31:24.707 00:31:24.707 ' 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:24.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.707 --rc genhtml_branch_coverage=1 00:31:24.707 --rc genhtml_function_coverage=1 00:31:24.707 --rc genhtml_legend=1 00:31:24.707 --rc geninfo_all_blocks=1 00:31:24.707 --rc geninfo_unexecuted_blocks=1 00:31:24.707 00:31:24.707 ' 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:24.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.707 --rc genhtml_branch_coverage=1 00:31:24.707 --rc genhtml_function_coverage=1 00:31:24.707 --rc genhtml_legend=1 00:31:24.707 --rc geninfo_all_blocks=1 00:31:24.707 --rc geninfo_unexecuted_blocks=1 00:31:24.707 00:31:24.707 ' 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.707 
23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.707 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.708 
23:02:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:24.708 23:02:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:24.708 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.966 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:24.966 
23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:24.966 23:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:24.966 23:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.868 23:02:34 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:26.868 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:26.868 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.868 23:02:34 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:26.868 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.868 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:26.869 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.869 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:27.128 23:02:34 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:27.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:27.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:31:27.128 00:31:27.128 --- 10.0.0.2 ping statistics --- 00:31:27.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.128 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:27.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:27.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:31:27.128 00:31:27.128 --- 10.0.0.1 ping statistics --- 00:31:27.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.128 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:27.128 23:02:34 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=230685 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 230685 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 230685 ']' 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.128 23:02:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:27.128 [2024-12-10 23:02:34.799444] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:27.128 [2024-12-10 23:02:34.800591] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:31:27.128 [2024-12-10 23:02:34.800666] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.386 [2024-12-10 23:02:34.877062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:27.386 [2024-12-10 23:02:34.938978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.386 [2024-12-10 23:02:34.939029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.386 [2024-12-10 23:02:34.939044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.386 [2024-12-10 23:02:34.939056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.386 [2024-12-10 23:02:34.939067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:27.386 [2024-12-10 23:02:34.940596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.386 [2024-12-10 23:02:34.940602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.386 [2024-12-10 23:02:35.041182] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:27.386 [2024-12-10 23:02:35.041233] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:27.386 [2024-12-10 23:02:35.041440] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:27.386 5000+0 records in 00:31:27.386 5000+0 records out 00:31:27.386 10240000 bytes (10 MB, 9.8 MiB) copied, 0.014672 s, 698 MB/s 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.386 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:27.645 AIO0 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.645 23:02:35 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:27.645 [2024-12-10 23:02:35.161213] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:27.645 [2024-12-10 23:02:35.185452] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 230685 0 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 230685 0 idle 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=230685 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 230685 -w 256 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 230685 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.29 reactor_0' 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 230685 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.29 reactor_0 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 230685 1 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 230685 1 idle 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=230685 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 230685 -w 256 00:31:27.645 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 230690 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.00 reactor_1' 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 230690 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.00 
reactor_1 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=230849 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 230685 0 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 230685 0 busy 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=230685 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 230685 -w 256 00:31:27.903 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 230685 root 20 0 128.2g 49152 35328 R 99.9 0.1 0:00.49 reactor_0' 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 230685 root 20 0 128.2g 49152 35328 R 99.9 0.1 0:00.49 reactor_0 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 230685 1 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 230685 1 busy 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=230685 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 230685 -w 256 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 230690 root 20 0 128.2g 49152 35328 R 93.3 0.1 0:00.25 reactor_1' 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 230690 root 20 0 128.2g 49152 35328 R 93.3 0.1 0:00.25 reactor_1 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:28.162 23:02:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 230849 00:31:38.131 Initializing NVMe Controllers 00:31:38.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:38.131 Controller IO queue size 256, less than required. 00:31:38.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:38.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:38.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:38.131 Initialization complete. Launching workers. 
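The reactor busy/idle check traced above (`interrupt/common.sh`) samples one `top -bHn 1` row for the reactor thread, strips leading whitespace, takes field 9 (%CPU), truncates the fraction, and compares against a threshold. A minimal runnable sketch of that logic — `check_reactor_state` is an illustrative name, not SPDK's, and the sample rows are copied from the log:

```shell
#!/usr/bin/env bash
# Sketch of the reactor_is_busy_or_idle %CPU check, assuming a top -bH row
# is passed in directly instead of being captured live with top/grep.

check_reactor_state() {
    local state=$1 top_row=$2
    local busy_threshold=30 idle_threshold=30

    # Field 9 of a `top -bH` row is %CPU; strip leading spaces first,
    # then drop the fractional part, as the traced helper does.
    local cpu_rate
    cpu_rate=$(echo "$top_row" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}

    if [[ $state == busy ]] && (( cpu_rate < busy_threshold )); then
        return 1
    elif [[ $state == idle ]] && (( cpu_rate > idle_threshold )); then
        return 1
    fi
    return 0
}

# Sample rows from the log above: 93.3% passes the busy check,
# 0.0% passes the idle check.
check_reactor_state busy ' 230690 root 20 0 128.2g 49152 35328 R 93.3 0.1 0:00.25 reactor_1' && echo busy-ok
check_reactor_state idle ' 230685 root 20 0 128.2g 49152 35328 S 0.0 0.1 0:20.23 reactor_0' && echo idle-ok
```

The real helper also retries the `top` sample up to 10 times (the `(( j = 10 ))` loop in the trace) before giving up.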
00:31:38.131 ======================================================== 00:31:38.131 Latency(us) 00:31:38.131 Device Information : IOPS MiB/s Average min max 00:31:38.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13780.93 53.83 18589.82 4288.77 22724.85 00:31:38.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13181.54 51.49 19435.67 3933.14 23575.43 00:31:38.131 ======================================================== 00:31:38.131 Total : 26962.47 105.32 19003.34 3933.14 23575.43 00:31:38.131 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 230685 0 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 230685 0 idle 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=230685 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 230685 -w 256 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 230685 root 20 0 128.2g 49152 35328 S 0.0 0.1 0:20.23 reactor_0' 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 230685 root 20 0 128.2g 49152 35328 S 0.0 0.1 0:20.23 reactor_0 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 230685 1 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 230685 1 idle 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=230685 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:38.131 23:02:45 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 230685 -w 256 00:31:38.131 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:38.408 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 230690 root 20 0 128.2g 49152 35328 S 0.0 0.1 0:09.98 reactor_1' 00:31:38.408 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 230690 root 20 0 128.2g 49152 35328 S 0.0 0.1 0:09.98 reactor_1 00:31:38.408 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:38.408 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:38.408 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:38.408 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:38.408 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:38.408 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:38.408 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:38.408 23:02:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:38.408 23:02:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:38.675 23:02:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
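The `waitforserial` helper invoked at this point polls `lsblk -l -o NAME,SERIAL` until the expected number of devices carrying the given serial shows up, sleeping between attempts inside the `(( i++ <= 15 ))` loop seen in the trace. A self-contained sketch of that pattern — `list_block_devices` is a hypothetical stub standing in for `lsblk` so the sketch runs without real NVMe hardware, and the sleep is shortened:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial polling loop from common/autotest_common.sh.

waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found=0
    while (( i++ <= 15 )); do
        # Count block-device rows whose SERIAL column matches.
        found=$(list_block_devices | grep -c "$serial")
        (( found == expected )) && return 0
        sleep 1
    done
    return 1
}

# Hypothetical stand-in for `lsblk -l -o NAME,SERIAL`:
list_block_devices() {
    printf 'nvme0n1 SPDKISFASTANDAWESOME\n'
}

waitforserial SPDKISFASTANDAWESOME && echo connected
```

The matching `waitforserial_disconnect` in the teardown below inverts the check, returning once `lsblk` no longer lists the serial.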
00:31:38.675 23:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:31:38.675 23:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:38.675 23:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:38.675 23:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 230685 0 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 230685 0 idle 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=230685 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 230685 -w 256 00:31:40.574 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:40.832 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 230685 root 20 0 128.2g 61440 35328 S 0.0 0.1 0:20.33 reactor_0' 00:31:40.832 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 230685 root 20 0 128.2g 61440 35328 S 0.0 0.1 0:20.33 reactor_0 00:31:40.832 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 230685 1 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 230685 1 idle 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=230685 00:31:40.833 
23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 230685 -w 256 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 230690 root 20 0 128.2g 61440 35328 S 0.0 0.1 0:10.01 reactor_1' 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 230690 root 20 0 128.2g 61440 35328 S 0.0 0.1 0:10.01 reactor_1 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:40.833 23:02:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:41.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:41.091 23:02:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:41.091 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:31:41.091 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:41.091 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:41.349 rmmod nvme_tcp 00:31:41.349 rmmod nvme_fabrics 00:31:41.349 rmmod nvme_keyring 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:41.349 23:02:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 230685 ']' 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 230685 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 230685 ']' 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 230685 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 230685 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 230685' 00:31:41.349 killing process with pid 230685 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 230685 00:31:41.349 23:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 230685 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:41.609 23:02:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.515 23:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:43.515 00:31:43.515 real 0m18.962s 00:31:43.515 user 0m36.934s 00:31:43.515 sys 0m6.692s 00:31:43.515 23:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:43.515 23:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:43.515 ************************************ 00:31:43.515 END TEST nvmf_interrupt 00:31:43.515 ************************************ 00:31:43.515 00:31:43.515 real 24m59.557s 00:31:43.515 user 58m19.877s 00:31:43.515 sys 6m37.572s 00:31:43.515 23:02:51 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:43.515 23:02:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:43.515 ************************************ 00:31:43.515 END TEST nvmf_tcp 00:31:43.515 ************************************ 00:31:43.773 23:02:51 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:31:43.773 23:02:51 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:43.773 23:02:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:43.773 23:02:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:43.773 23:02:51 -- common/autotest_common.sh@10 -- # set +x 00:31:43.773 ************************************ 
00:31:43.773 START TEST spdkcli_nvmf_tcp 00:31:43.773 ************************************ 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:43.773 * Looking for test storage... 00:31:43.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:31:43.773 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:43.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.774 --rc genhtml_branch_coverage=1 00:31:43.774 --rc genhtml_function_coverage=1 00:31:43.774 --rc genhtml_legend=1 00:31:43.774 --rc geninfo_all_blocks=1 00:31:43.774 --rc geninfo_unexecuted_blocks=1 00:31:43.774 00:31:43.774 ' 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:43.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.774 --rc genhtml_branch_coverage=1 00:31:43.774 --rc genhtml_function_coverage=1 00:31:43.774 --rc genhtml_legend=1 00:31:43.774 --rc geninfo_all_blocks=1 
00:31:43.774 --rc geninfo_unexecuted_blocks=1 00:31:43.774 00:31:43.774 ' 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:43.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.774 --rc genhtml_branch_coverage=1 00:31:43.774 --rc genhtml_function_coverage=1 00:31:43.774 --rc genhtml_legend=1 00:31:43.774 --rc geninfo_all_blocks=1 00:31:43.774 --rc geninfo_unexecuted_blocks=1 00:31:43.774 00:31:43.774 ' 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:43.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.774 --rc genhtml_branch_coverage=1 00:31:43.774 --rc genhtml_function_coverage=1 00:31:43.774 --rc genhtml_legend=1 00:31:43.774 --rc geninfo_all_blocks=1 00:31:43.774 --rc geninfo_unexecuted_blocks=1 00:31:43.774 00:31:43.774 ' 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:43.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=232849 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 232849 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 232849 ']' 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.774 23:02:51 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:43.774 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:43.774 [2024-12-10 23:02:51.486051] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:31:43.774 [2024-12-10 23:02:51.486133] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232849 ] 00:31:44.033 [2024-12-10 23:02:51.552803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:44.033 [2024-12-10 23:02:51.612087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.033 [2024-12-10 23:02:51.612091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.033 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:44.033 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:31:44.033 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:44.033 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:44.033 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:44.033 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:44.033 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:44.033 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:44.033 
23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:44.033 23:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:44.033 23:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:44.033 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:44.033 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:44.033 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:44.033 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:44.033 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:44.033 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:44.033 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:44.033 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:44.033 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:44.033 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:44.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:44.033 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:44.033 ' 00:31:47.312 [2024-12-10 23:02:54.381177] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.255 [2024-12-10 23:02:55.653584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:50.777 [2024-12-10 23:02:58.004938] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:31:52.672 [2024-12-10 23:03:00.019176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:54.042 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:54.042 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:54.042 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:54.042 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:54.042 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:54.042 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:54.042 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:54.042 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:54.042 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:54.042 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:54.042 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:54.042 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:54.042 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:54.042 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:54.042 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:31:54.042 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:54.042 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:54.043 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:54.043 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:54.043 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:54.043 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:54.043 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:54.043 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:54.043 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:54.043 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:54.043 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:54.043 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:54.043 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:54.043 23:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:54.043 23:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.043 
23:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:54.043 23:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:54.043 23:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.043 23:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:54.043 23:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:54.043 23:03:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:54.607 23:03:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:54.607 23:03:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:54.607 23:03:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:54.607 23:03:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.607 23:03:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:54.607 23:03:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:54.607 23:03:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.607 23:03:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:54.607 23:03:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:54.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:54.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:54.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:54.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:54.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:54.607 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:54.607 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:54.607 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:54.607 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:54.607 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:54.607 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:54.607 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:54.607 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:54.607 ' 00:31:59.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:59.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:59.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:59.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:59.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:59.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:59.863 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:59.863 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:59.863 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:59.863 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:59.863 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:59.863 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:59.863 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:59.863 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 232849 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 232849 ']' 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 232849 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 232849 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 232849' 00:32:00.121 killing process with pid 232849 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 232849 00:32:00.121 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 232849 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 232849 ']' 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 232849 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 232849 ']' 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 232849 00:32:00.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (232849) - No such process 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 232849 is not found' 00:32:00.379 Process with pid 232849 is not found 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:00.379 00:32:00.379 real 0m16.638s 00:32:00.379 user 0m35.496s 00:32:00.379 sys 0m0.729s 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.379 23:03:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:00.379 ************************************ 00:32:00.379 END TEST spdkcli_nvmf_tcp 00:32:00.379 ************************************ 00:32:00.379 23:03:07 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:00.379 23:03:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:00.379 23:03:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.379 23:03:07 -- common/autotest_common.sh@10 
-- # set +x 00:32:00.379 ************************************ 00:32:00.379 START TEST nvmf_identify_passthru 00:32:00.379 ************************************ 00:32:00.379 23:03:07 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:00.379 * Looking for test storage... 00:32:00.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.379 23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:00.379 23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:32:00.379 23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:00.379 23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:00.379 23:03:08 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.379 23:03:08 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:00.379 23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.379 23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:00.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.379 --rc genhtml_branch_coverage=1 00:32:00.379 --rc genhtml_function_coverage=1 00:32:00.379 --rc genhtml_legend=1 00:32:00.379 --rc geninfo_all_blocks=1 00:32:00.379 --rc geninfo_unexecuted_blocks=1 00:32:00.379 00:32:00.379 ' 00:32:00.379 
23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:00.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.379 --rc genhtml_branch_coverage=1 00:32:00.379 --rc genhtml_function_coverage=1 00:32:00.379 --rc genhtml_legend=1 00:32:00.379 --rc geninfo_all_blocks=1 00:32:00.379 --rc geninfo_unexecuted_blocks=1 00:32:00.379 00:32:00.379 ' 00:32:00.379 23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:00.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.379 --rc genhtml_branch_coverage=1 00:32:00.379 --rc genhtml_function_coverage=1 00:32:00.379 --rc genhtml_legend=1 00:32:00.379 --rc geninfo_all_blocks=1 00:32:00.379 --rc geninfo_unexecuted_blocks=1 00:32:00.379 00:32:00.379 ' 00:32:00.379 23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:00.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.379 --rc genhtml_branch_coverage=1 00:32:00.380 --rc genhtml_function_coverage=1 00:32:00.380 --rc genhtml_legend=1 00:32:00.380 --rc geninfo_all_blocks=1 00:32:00.380 --rc geninfo_unexecuted_blocks=1 00:32:00.380 00:32:00.380 ' 00:32:00.380 23:03:08 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.380 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.638 23:03:08 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.638 23:03:08 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.638 23:03:08 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.638 23:03:08 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.638 23:03:08 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.638 23:03:08 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.638 23:03:08 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.638 23:03:08 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:00.638 23:03:08 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:00.638 23:03:08 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:00.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.638 23:03:08 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.638 23:03:08 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.638 23:03:08 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.638 23:03:08 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.638 23:03:08 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.638 23:03:08 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.638 23:03:08 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.638 23:03:08 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.638 23:03:08 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:00.638 23:03:08 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.638 23:03:08 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.638 23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:00.638 23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:00.638 23:03:08 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.638 23:03:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:02.538 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:02.539 
23:03:10 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:02.539 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:02.539 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:02.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.539 23:03:10 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:02.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:02.539 
23:03:10 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:02.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:02.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:32:02.539 00:32:02.539 --- 10.0.0.2 ping statistics --- 00:32:02.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.539 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:02.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:02.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:32:02.539 00:32:02.539 --- 10.0.0.1 ping statistics --- 00:32:02.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.539 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:02.539 23:03:10 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:02.539 23:03:10 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:02.539 23:03:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:02.539 
23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:32:02.539 23:03:10 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:32:02.539 23:03:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:32:02.539 23:03:10 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:32:02.540 23:03:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:32:02.540 23:03:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:02.540 23:03:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:07.801 23:03:14 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:32:07.801 23:03:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:32:07.801 23:03:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:07.801 23:03:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:11.077 23:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:11.077 23:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:11.077 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:11.077 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.077 23:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:11.077 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:11.077 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.077 23:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=237995 00:32:11.077 23:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:11.077 23:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:11.077 23:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 237995 00:32:11.077 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 237995 ']' 00:32:11.077 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:32:11.077 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.077 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.077 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.077 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.077 [2024-12-10 23:03:18.751932] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:32:11.077 [2024-12-10 23:03:18.752031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.335 [2024-12-10 23:03:18.827811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:11.335 [2024-12-10 23:03:18.889710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:11.335 [2024-12-10 23:03:18.889773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:11.335 [2024-12-10 23:03:18.889802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:11.335 [2024-12-10 23:03:18.889818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:11.335 [2024-12-10 23:03:18.889830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:11.335 [2024-12-10 23:03:18.891433] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.335 [2024-12-10 23:03:18.891498] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:11.335 [2024-12-10 23:03:18.891571] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:11.335 [2024-12-10 23:03:18.891575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.335 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:11.335 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:11.335 23:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:11.335 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.335 23:03:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.335 INFO: Log level set to 20 00:32:11.335 INFO: Requests: 00:32:11.335 { 00:32:11.335 "jsonrpc": "2.0", 00:32:11.335 "method": "nvmf_set_config", 00:32:11.335 "id": 1, 00:32:11.335 "params": { 00:32:11.335 "admin_cmd_passthru": { 00:32:11.335 "identify_ctrlr": true 00:32:11.335 } 00:32:11.335 } 00:32:11.335 } 00:32:11.335 00:32:11.335 INFO: response: 00:32:11.335 { 00:32:11.335 "jsonrpc": "2.0", 00:32:11.335 "id": 1, 00:32:11.335 "result": true 00:32:11.335 } 00:32:11.335 00:32:11.335 23:03:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.335 23:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:11.335 23:03:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.335 23:03:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.335 INFO: Setting log level to 20 00:32:11.335 INFO: Setting log level to 20 00:32:11.335 INFO: Log level set to 20 00:32:11.335 INFO: Log level set to 20 00:32:11.335 
INFO: Requests: 00:32:11.335 { 00:32:11.335 "jsonrpc": "2.0", 00:32:11.335 "method": "framework_start_init", 00:32:11.335 "id": 1 00:32:11.335 } 00:32:11.335 00:32:11.335 INFO: Requests: 00:32:11.335 { 00:32:11.335 "jsonrpc": "2.0", 00:32:11.335 "method": "framework_start_init", 00:32:11.335 "id": 1 00:32:11.335 } 00:32:11.335 00:32:11.593 [2024-12-10 23:03:19.099919] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:11.593 INFO: response: 00:32:11.593 { 00:32:11.593 "jsonrpc": "2.0", 00:32:11.593 "id": 1, 00:32:11.593 "result": true 00:32:11.593 } 00:32:11.593 00:32:11.593 INFO: response: 00:32:11.593 { 00:32:11.593 "jsonrpc": "2.0", 00:32:11.593 "id": 1, 00:32:11.593 "result": true 00:32:11.593 } 00:32:11.593 00:32:11.593 23:03:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.593 23:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:11.593 23:03:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.593 23:03:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.593 INFO: Setting log level to 40 00:32:11.593 INFO: Setting log level to 40 00:32:11.593 INFO: Setting log level to 40 00:32:11.593 [2024-12-10 23:03:19.109955] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.593 23:03:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.593 23:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:11.593 23:03:19 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:11.593 23:03:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.593 23:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:32:11.593 23:03:19 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.593 23:03:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:14.869 Nvme0n1 00:32:14.869 23:03:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.869 23:03:21 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:14.869 23:03:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.869 23:03:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:14.869 23:03:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.869 23:03:21 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:14.869 23:03:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.869 23:03:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:14.869 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.869 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.869 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:14.869 [2024-12-10 23:03:22.009748] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.869 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:14.869 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.869 23:03:22 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:14.869 [ 00:32:14.869 { 00:32:14.869 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:14.869 "subtype": "Discovery", 00:32:14.869 "listen_addresses": [], 00:32:14.869 "allow_any_host": true, 00:32:14.869 "hosts": [] 00:32:14.869 }, 00:32:14.869 { 00:32:14.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:14.869 "subtype": "NVMe", 00:32:14.869 "listen_addresses": [ 00:32:14.869 { 00:32:14.869 "trtype": "TCP", 00:32:14.869 "adrfam": "IPv4", 00:32:14.869 "traddr": "10.0.0.2", 00:32:14.869 "trsvcid": "4420" 00:32:14.869 } 00:32:14.869 ], 00:32:14.869 "allow_any_host": true, 00:32:14.869 "hosts": [], 00:32:14.869 "serial_number": "SPDK00000000000001", 00:32:14.869 "model_number": "SPDK bdev Controller", 00:32:14.869 "max_namespaces": 1, 00:32:14.869 "min_cntlid": 1, 00:32:14.869 "max_cntlid": 65519, 00:32:14.869 "namespaces": [ 00:32:14.869 { 00:32:14.869 "nsid": 1, 00:32:14.869 "bdev_name": "Nvme0n1", 00:32:14.869 "name": "Nvme0n1", 00:32:14.869 "nguid": "585A661523644464AFC84B96ABFE391E", 00:32:14.869 "uuid": "585a6615-2364-4464-afc8-4b96abfe391e" 00:32:14.869 } 00:32:14.869 ] 00:32:14.869 } 00:32:14.869 ] 00:32:14.869 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:14.869 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.869 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:14.869 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:14.869 23:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:14.869 23:03:22 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:14.869 23:03:22 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:14.869 23:03:22 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:14.869 23:03:22 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:14.869 23:03:22 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:14.869 23:03:22 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:14.869 rmmod nvme_tcp 00:32:14.869 rmmod nvme_fabrics 00:32:14.869 rmmod nvme_keyring 00:32:14.869 23:03:22 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:14.869 23:03:22 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:14.869 23:03:22 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:14.870 23:03:22 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 237995 ']' 00:32:14.870 23:03:22 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 237995 00:32:14.870 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 237995 ']' 00:32:14.870 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 237995 00:32:14.870 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:14.870 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:14.870 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 237995 00:32:14.870 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:14.870 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:14.870 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 237995' 00:32:14.870 killing process with pid 237995 00:32:14.870 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 237995 00:32:14.870 23:03:22 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 237995 00:32:16.833 23:03:24 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:16.833 23:03:24 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:16.833 23:03:24 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:16.833 23:03:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:16.833 23:03:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:16.833 23:03:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# iptables-restore 00:32:16.833 23:03:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:16.833 23:03:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.833 23:03:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:16.833 23:03:24 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.833 23:03:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:16.833 23:03:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.739 23:03:26 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:18.739 00:32:18.739 real 0m18.096s 00:32:18.739 user 0m26.289s 00:32:18.739 sys 0m3.058s 00:32:18.739 23:03:26 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:18.739 23:03:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:18.739 ************************************ 00:32:18.739 END TEST nvmf_identify_passthru 00:32:18.739 ************************************ 00:32:18.739 23:03:26 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:18.739 23:03:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:18.739 23:03:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:18.739 23:03:26 -- common/autotest_common.sh@10 -- # set +x 00:32:18.739 ************************************ 00:32:18.739 START TEST nvmf_dif 00:32:18.739 ************************************ 00:32:18.739 23:03:26 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:18.739 * Looking for test storage... 
00:32:18.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:18.739 23:03:26 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:18.739 23:03:26 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:32:18.739 23:03:26 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:18.739 23:03:26 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:18.739 23:03:26 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:18.739 23:03:26 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:18.739 23:03:26 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:18.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.739 --rc genhtml_branch_coverage=1 00:32:18.739 --rc genhtml_function_coverage=1 00:32:18.739 --rc genhtml_legend=1 00:32:18.739 --rc geninfo_all_blocks=1 00:32:18.739 --rc geninfo_unexecuted_blocks=1 00:32:18.739 00:32:18.739 ' 00:32:18.739 23:03:26 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:18.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.739 --rc genhtml_branch_coverage=1 00:32:18.739 --rc genhtml_function_coverage=1 00:32:18.739 --rc genhtml_legend=1 00:32:18.739 --rc geninfo_all_blocks=1 00:32:18.739 --rc geninfo_unexecuted_blocks=1 00:32:18.739 00:32:18.739 ' 00:32:18.739 23:03:26 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:32:18.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.739 --rc genhtml_branch_coverage=1 00:32:18.739 --rc genhtml_function_coverage=1 00:32:18.739 --rc genhtml_legend=1 00:32:18.739 --rc geninfo_all_blocks=1 00:32:18.739 --rc geninfo_unexecuted_blocks=1 00:32:18.739 00:32:18.739 ' 00:32:18.739 23:03:26 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:18.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.739 --rc genhtml_branch_coverage=1 00:32:18.739 --rc genhtml_function_coverage=1 00:32:18.739 --rc genhtml_legend=1 00:32:18.739 --rc geninfo_all_blocks=1 00:32:18.739 --rc geninfo_unexecuted_blocks=1 00:32:18.739 00:32:18.739 ' 00:32:18.740 23:03:26 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.740 23:03:26 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.740 23:03:26 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:18.740 23:03:26 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.740 23:03:26 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.740 23:03:26 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.740 23:03:26 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.740 23:03:26 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.740 23:03:26 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.740 23:03:26 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:18.740 23:03:26 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:18.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:18.740 23:03:26 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:18.740 23:03:26 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
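The `[: : integer expression expected` message captured above is a genuine shell error: `nvmf/common.sh` line 33 runs `'[' '' -eq 1 ']'`, and the `[` builtin rejects an empty expansion as an operand to `-eq`. A minimal reproduction of the failure mode and the usual defensive fix (the variable name `flag` is ours, not the script's):

```shell
# Reproduces the "[: : integer expression expected" failure seen in the log:
# -eq needs integers on both sides, and an empty expansion is not an integer.
flag=""

# Unsafe: [ "$flag" -eq 1 ] would print that same error and return non-zero.
# Safe: default the expansion to 0 so the operand is always an integer.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"   # this branch runs: 0 -eq 1 is false
fi
```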
00:32:18.740 23:03:26 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:18.740 23:03:26 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:18.740 23:03:26 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.740 23:03:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:18.740 23:03:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:18.740 23:03:26 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:32:18.740 23:03:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:32:20.643 23:03:28 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:20.643 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:20.643 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.643 23:03:28 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:20.643 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:20.643 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.643 
23:03:28 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.643 23:03:28 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:20.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:20.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:32:20.903 00:32:20.903 --- 10.0.0.2 ping statistics --- 00:32:20.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.903 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:20.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:32:20.903 00:32:20.903 --- 10.0.0.1 ping statistics --- 00:32:20.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.903 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:20.903 23:03:28 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:22.282 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:22.282 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:22.282 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:22.282 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:22.282 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:22.282 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:22.282 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:22.282 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:22.282 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:22.282 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:22.282 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:22.282 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:22.282 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:32:22.282 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:22.282 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:22.282 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:22.282 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:22.282 23:03:29 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.282 23:03:29 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:22.282 23:03:29 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:22.282 23:03:29 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.282 23:03:29 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:22.282 23:03:29 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:22.282 23:03:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:22.282 23:03:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:22.282 23:03:29 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:22.282 23:03:29 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:22.282 23:03:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:22.282 23:03:29 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=241253 00:32:22.282 23:03:29 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:22.282 23:03:29 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 241253 00:32:22.282 23:03:29 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 241253 ']' 00:32:22.282 23:03:29 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.282 23:03:29 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:22.282 23:03:29 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:22.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.282 23:03:29 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:22.282 23:03:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:22.282 [2024-12-10 23:03:29.883919] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:32:22.282 [2024-12-10 23:03:29.884002] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.282 [2024-12-10 23:03:29.953355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.282 [2024-12-10 23:03:30.009872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.282 [2024-12-10 23:03:30.009926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.282 [2024-12-10 23:03:30.009959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.282 [2024-12-10 23:03:30.009978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.282 [2024-12-10 23:03:30.009990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
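The `waitforlisten` step above blocks until `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`, retrying with `max_retries=100` as the trace shows. A hedged, generic sketch of that poll-until-ready pattern, polling for a filesystem path instead of an RPC socket (`waitforpath` is our own name, not the real helper in `common/autotest_common.sh`):

```shell
# Poll-until-ready loop in the style of waitforlisten: retry a readiness
# check with a short sleep, up to a bounded number of attempts.
waitforpath() {
    local path=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

# Example: an existing path is found on the first attempt.
waitforpath /tmp 5 && echo "ready"
```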
00:32:22.282 [2024-12-10 23:03:30.010557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.539 23:03:30 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:22.539 23:03:30 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:32:22.539 23:03:30 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:22.539 23:03:30 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:22.539 23:03:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:22.539 23:03:30 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.539 23:03:30 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:22.539 23:03:30 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:22.539 23:03:30 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.539 23:03:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:22.539 [2024-12-10 23:03:30.151822] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.539 23:03:30 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.539 23:03:30 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:22.539 23:03:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:22.539 23:03:30 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.539 23:03:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:22.539 ************************************ 00:32:22.539 START TEST fio_dif_1_default 00:32:22.539 ************************************ 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:22.539 bdev_null0 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:22.539 [2024-12-10 23:03:30.208154] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:22.539 { 00:32:22.539 "params": { 00:32:22.539 "name": "Nvme$subsystem", 00:32:22.539 "trtype": "$TEST_TRANSPORT", 00:32:22.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:22.539 "adrfam": "ipv4", 00:32:22.539 "trsvcid": "$NVMF_PORT", 00:32:22.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:22.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:22.539 "hdgst": ${hdgst:-false}, 00:32:22.539 "ddgst": ${ddgst:-false} 00:32:22.539 }, 00:32:22.539 "method": "bdev_nvme_attach_controller" 00:32:22.539 } 00:32:22.539 EOF 00:32:22.539 )") 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
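The `ldd ... | grep libasan | awk '{print $3}'` pipeline above extracts a sanitizer runtime's resolved path so it can be placed in `LD_PRELOAD` ahead of the fio bdev plugin (empty output, as in this run, means the binary was not built with ASan). A sketch of just the parsing step, run on a canned `ldd`-style line; `parse_lib` is our name for the pipeline, and real use would feed it `ldd <binary>`:

```shell
# Extract the resolved path (third field of "name => /path (addr)") for a
# library matched by name; empty output means the binary does not link it.
parse_lib() {
    grep "$1" | awk '{print $3}'
}

# Canned ldd-style output for illustration; prints /usr/lib64/libasan.so.8
printf 'libasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f0a)\n' |
    parse_lib libasan
```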
00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:22.539 "params": { 00:32:22.539 "name": "Nvme0", 00:32:22.539 "trtype": "tcp", 00:32:22.539 "traddr": "10.0.0.2", 00:32:22.539 "adrfam": "ipv4", 00:32:22.539 "trsvcid": "4420", 00:32:22.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.539 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:22.539 "hdgst": false, 00:32:22.539 "ddgst": false 00:32:22.539 }, 00:32:22.539 "method": "bdev_nvme_attach_controller" 00:32:22.539 }' 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:22.539 23:03:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.795 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:22.795 fio-3.35 
00:32:22.795 Starting 1 thread 00:32:34.984 00:32:34.984 filename0: (groupid=0, jobs=1): err= 0: pid=241481: Tue Dec 10 23:03:41 2024 00:32:34.984 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10013msec) 00:32:34.984 slat (nsec): min=4043, max=99275, avg=9593.09, stdev=4089.59 00:32:34.984 clat (usec): min=634, max=47314, avg=40670.39, stdev=3647.67 00:32:34.984 lat (usec): min=642, max=47359, avg=40679.98, stdev=3647.73 00:32:34.984 clat percentiles (usec): 00:32:34.984 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:34.984 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:34.984 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:34.984 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:32:34.984 | 99.99th=[47449] 00:32:34.984 bw ( KiB/s): min= 384, max= 448, per=99.72%, avg=392.00, stdev=17.60, samples=20 00:32:34.984 iops : min= 96, max= 112, avg=98.00, stdev= 4.40, samples=20 00:32:34.984 lat (usec) : 750=0.81% 00:32:34.984 lat (msec) : 50=99.19% 00:32:34.985 cpu : usr=90.98%, sys=8.74%, ctx=14, majf=0, minf=192 00:32:34.985 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:34.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.985 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.985 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:34.985 00:32:34.985 Run status group 0 (all jobs): 00:32:34.985 READ: bw=393KiB/s (403kB/s), 393KiB/s-393KiB/s (403kB/s-403kB/s), io=3936KiB (4030kB), run=10013-10013msec 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:34.985 
23:03:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.985 00:32:34.985 real 0m11.229s 00:32:34.985 user 0m10.205s 00:32:34.985 sys 0m1.195s 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:34.985 ************************************ 00:32:34.985 END TEST fio_dif_1_default 00:32:34.985 ************************************ 00:32:34.985 23:03:41 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:34.985 23:03:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:34.985 23:03:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.985 23:03:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:34.985 ************************************ 00:32:34.985 START TEST fio_dif_1_multi_subsystems 00:32:34.985 ************************************ 00:32:34.985 23:03:41 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.985 bdev_null0 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.985 23:03:41 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.985 [2024-12-10 23:03:41.492573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.985 bdev_null1 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:32:34.985 { 00:32:34.985 "params": { 00:32:34.985 "name": "Nvme$subsystem", 00:32:34.985 "trtype": "$TEST_TRANSPORT", 00:32:34.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:34.985 "adrfam": "ipv4", 00:32:34.985 "trsvcid": "$NVMF_PORT", 00:32:34.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:34.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:34.985 "hdgst": ${hdgst:-false}, 00:32:34.985 "ddgst": ${ddgst:-false} 00:32:34.985 }, 00:32:34.985 "method": "bdev_nvme_attach_controller" 00:32:34.985 } 00:32:34.985 EOF 00:32:34.985 )") 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:34.985 
23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:34.985 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:34.986 { 00:32:34.986 "params": { 00:32:34.986 "name": "Nvme$subsystem", 00:32:34.986 "trtype": "$TEST_TRANSPORT", 00:32:34.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:34.986 "adrfam": "ipv4", 00:32:34.986 "trsvcid": "$NVMF_PORT", 00:32:34.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:34.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:34.986 "hdgst": ${hdgst:-false}, 00:32:34.986 "ddgst": ${ddgst:-false} 00:32:34.986 }, 00:32:34.986 "method": "bdev_nvme_attach_controller" 00:32:34.986 } 00:32:34.986 EOF 00:32:34.986 )") 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:34.986 "params": { 00:32:34.986 "name": "Nvme0", 00:32:34.986 "trtype": "tcp", 00:32:34.986 "traddr": "10.0.0.2", 00:32:34.986 "adrfam": "ipv4", 00:32:34.986 "trsvcid": "4420", 00:32:34.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:34.986 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:34.986 "hdgst": false, 00:32:34.986 "ddgst": false 00:32:34.986 }, 00:32:34.986 "method": "bdev_nvme_attach_controller" 00:32:34.986 },{ 00:32:34.986 "params": { 00:32:34.986 "name": "Nvme1", 00:32:34.986 "trtype": "tcp", 00:32:34.986 "traddr": "10.0.0.2", 00:32:34.986 "adrfam": "ipv4", 00:32:34.986 "trsvcid": "4420", 00:32:34.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:34.986 "hdgst": false, 00:32:34.986 "ddgst": false 00:32:34.986 }, 00:32:34.986 "method": "bdev_nvme_attach_controller" 00:32:34.986 }' 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:34.986 23:03:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:34.986 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:34.986 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:34.986 fio-3.35 00:32:34.986 Starting 2 threads 00:32:44.950 00:32:44.950 filename0: (groupid=0, jobs=1): err= 0: pid=242904: Tue Dec 10 23:03:52 2024 00:32:44.950 read: IOPS=205, BW=821KiB/s (841kB/s)(8240KiB/10032msec) 00:32:44.950 slat (nsec): min=7182, max=84506, avg=8785.44, stdev=3009.92 00:32:44.950 clat (usec): min=523, max=42745, avg=19451.54, stdev=20291.13 00:32:44.950 lat (usec): min=530, max=42758, avg=19460.32, stdev=20290.97 00:32:44.950 clat percentiles (usec): 00:32:44.950 | 1.00th=[ 553], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 611], 00:32:44.950 | 30.00th=[ 627], 40.00th=[ 660], 50.00th=[ 791], 60.00th=[41157], 00:32:44.950 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:44.950 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:44.950 | 99.99th=[42730] 00:32:44.950 bw ( KiB/s): min= 704, max= 1088, per=50.43%, avg=822.40, stdev=89.37, samples=20 00:32:44.950 iops : min= 176, max= 272, avg=205.60, stdev=22.34, samples=20 00:32:44.950 lat (usec) : 750=49.61%, 1000=3.30% 00:32:44.950 lat (msec) : 2=0.87%, 50=46.21% 00:32:44.950 cpu : usr=95.43%, sys=4.26%, ctx=12, majf=0, minf=123 00:32:44.950 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:44.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.950 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.950 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:44.950 filename1: (groupid=0, jobs=1): err= 0: pid=242905: Tue Dec 10 23:03:52 2024 00:32:44.950 read: IOPS=202, BW=809KiB/s (828kB/s)(8112KiB/10033msec) 00:32:44.950 slat (nsec): min=7216, max=40802, avg=8840.93, stdev=2404.89 00:32:44.950 clat (usec): min=513, max=43127, avg=19760.98, stdev=20311.96 00:32:44.950 lat (usec): min=521, max=43141, avg=19769.82, stdev=20311.74 00:32:44.950 clat percentiles (usec): 00:32:44.950 | 1.00th=[ 562], 5.00th=[ 586], 10.00th=[ 603], 20.00th=[ 627], 00:32:44.950 | 30.00th=[ 644], 40.00th=[ 676], 50.00th=[ 898], 60.00th=[41157], 00:32:44.950 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:44.950 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:32:44.950 | 99.99th=[43254] 00:32:44.950 bw ( KiB/s): min= 704, max= 896, per=49.64%, avg=809.60, stdev=52.01, samples=20 00:32:44.950 iops : min= 176, max= 224, avg=202.40, stdev=13.00, samples=20 00:32:44.950 lat (usec) : 750=48.96%, 1000=3.06% 00:32:44.950 lat (msec) : 2=1.04%, 50=46.94% 00:32:44.950 cpu : usr=95.62%, sys=4.06%, ctx=23, majf=0, minf=190 00:32:44.950 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.950 issued rwts: total=2028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.950 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:44.950 00:32:44.950 Run status group 0 (all jobs): 00:32:44.950 READ: bw=1630KiB/s (1669kB/s), 809KiB/s-821KiB/s (828kB/s-841kB/s), io=16.0MiB (16.7MB), run=10032-10033msec 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
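The teardown trace above (`destroy_subsystems 0 1`) issues two RPCs per subsystem id: delete the NVMe-oF subsystem, then delete its backing null bdev. A dry-run sketch of that loop, with `echo` standing in for the harness's real `rpc_cmd` wrapper (the `rpc.py` prefix is illustrative):

```shell
# Dry-run sketch of the destroy_subsystems helper seen in the trace.
# echo stands in for the real rpc_cmd wrapper around SPDK's rpc.py.
rpc_cmd() { echo "rpc.py $*"; }

destroy_subsystem() {
    sub_id=$1
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub_id"
    rpc_cmd bdev_null_delete "bdev_null$sub_id"
}

# destroy_subsystems 0 1 amounts to:
for sub in 0 1; do
    destroy_subsystem "$sub"
done
```

The ordering matters: the subsystem (which holds the namespace) is removed before the bdev backing that namespace is deleted, matching the sequence of `rpc_cmd` calls in the log.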
00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.209 00:32:45.209 real 0m11.351s 00:32:45.209 user 0m20.535s 00:32:45.209 sys 0m1.151s 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.209 23:03:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:45.209 ************************************ 00:32:45.209 END TEST fio_dif_1_multi_subsystems 00:32:45.209 ************************************ 00:32:45.209 23:03:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:45.209 23:03:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:45.209 23:03:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.209 23:03:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:45.209 ************************************ 00:32:45.209 START TEST fio_dif_rand_params 00:32:45.209 ************************************ 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:45.209 bdev_null0 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:45.209 [2024-12-10 23:03:52.888733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.209 { 00:32:45.209 "params": { 00:32:45.209 "name": "Nvme$subsystem", 00:32:45.209 "trtype": "$TEST_TRANSPORT", 00:32:45.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.209 "adrfam": "ipv4", 00:32:45.209 "trsvcid": "$NVMF_PORT", 00:32:45.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.209 "hdgst": ${hdgst:-false}, 00:32:45.209 "ddgst": ${ddgst:-false} 00:32:45.209 }, 00:32:45.209 "method": "bdev_nvme_attach_controller" 00:32:45.209 } 00:32:45.209 EOF 00:32:45.209 )") 00:32:45.209 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.210 "params": { 00:32:45.210 "name": "Nvme0", 00:32:45.210 "trtype": "tcp", 00:32:45.210 "traddr": "10.0.0.2", 00:32:45.210 "adrfam": "ipv4", 00:32:45.210 "trsvcid": "4420", 00:32:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:45.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:45.210 "hdgst": false, 00:32:45.210 "ddgst": false 00:32:45.210 }, 00:32:45.210 "method": "bdev_nvme_attach_controller" 00:32:45.210 }' 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:45.210 23:03:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:45.467 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:45.467 ... 00:32:45.467 fio-3.35 00:32:45.467 Starting 3 threads 00:32:52.024 00:32:52.024 filename0: (groupid=0, jobs=1): err= 0: pid=244295: Tue Dec 10 23:03:58 2024 00:32:52.024 read: IOPS=233, BW=29.1MiB/s (30.5MB/s)(146MiB/5007msec) 00:32:52.024 slat (nsec): min=5183, max=58229, avg=17048.24, stdev=6526.56 00:32:52.024 clat (usec): min=7041, max=48711, avg=12847.56, stdev=3123.64 00:32:52.024 lat (usec): min=7056, max=48723, avg=12864.60, stdev=3123.27 00:32:52.024 clat percentiles (usec): 00:32:52.024 | 1.00th=[ 8717], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11076], 00:32:52.024 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12518], 60.00th=[13173], 00:32:52.024 | 70.00th=[13698], 80.00th=[14353], 90.00th=[15270], 95.00th=[15795], 00:32:52.024 | 99.00th=[17171], 99.50th=[47973], 99.90th=[48497], 99.95th=[48497], 00:32:52.024 | 99.99th=[48497] 00:32:52.024 bw ( KiB/s): min=27136, max=33280, per=35.68%, avg=29798.40, stdev=1901.23, samples=10 00:32:52.024 iops : min= 212, max= 260, avg=232.80, stdev=14.85, samples=10 00:32:52.024 lat (msec) : 10=5.91%, 20=93.57%, 50=0.51% 00:32:52.024 cpu : usr=94.65%, sys=4.81%, ctx=13, majf=0, minf=145 00:32:52.024 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.024 issued rwts: total=1167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.024 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:52.024 filename0: (groupid=0, jobs=1): err= 0: pid=244296: Tue Dec 10 23:03:58 2024 00:32:52.024 read: IOPS=217, BW=27.2MiB/s (28.6MB/s)(138MiB/5047msec) 00:32:52.024 slat (nsec): min=5836, max=41495, avg=14977.51, 
stdev=4349.02 00:32:52.024 clat (usec): min=4373, max=54656, avg=13706.79, stdev=3535.44 00:32:52.024 lat (usec): min=4384, max=54674, avg=13721.77, stdev=3535.62 00:32:52.024 clat percentiles (usec): 00:32:52.024 | 1.00th=[ 5407], 5.00th=[ 9110], 10.00th=[10945], 20.00th=[11863], 00:32:52.024 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13829], 60.00th=[14353], 00:32:52.024 | 70.00th=[14877], 80.00th=[15533], 90.00th=[16188], 95.00th=[16909], 00:32:52.024 | 99.00th=[18482], 99.50th=[19268], 99.90th=[54789], 99.95th=[54789], 00:32:52.024 | 99.99th=[54789] 00:32:52.024 bw ( KiB/s): min=24576, max=35328, per=33.62%, avg=28083.20, stdev=2956.16, samples=10 00:32:52.024 iops : min= 192, max= 276, avg=219.40, stdev=23.09, samples=10 00:32:52.024 lat (msec) : 10=7.27%, 20=92.27%, 50=0.09%, 100=0.36% 00:32:52.024 cpu : usr=93.26%, sys=6.24%, ctx=13, majf=0, minf=108 00:32:52.024 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.024 issued rwts: total=1100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.024 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:52.024 filename0: (groupid=0, jobs=1): err= 0: pid=244297: Tue Dec 10 23:03:58 2024 00:32:52.024 read: IOPS=203, BW=25.4MiB/s (26.7MB/s)(128MiB/5048msec) 00:32:52.024 slat (nsec): min=5897, max=44541, avg=14575.76, stdev=4086.65 00:32:52.024 clat (usec): min=6800, max=57659, avg=14683.85, stdev=5361.14 00:32:52.024 lat (usec): min=6808, max=57673, avg=14698.43, stdev=5360.95 00:32:52.024 clat percentiles (usec): 00:32:52.024 | 1.00th=[ 9503], 5.00th=[10814], 10.00th=[11469], 20.00th=[12387], 00:32:52.024 | 30.00th=[12911], 40.00th=[13566], 50.00th=[14091], 60.00th=[14746], 00:32:52.024 | 70.00th=[15270], 80.00th=[15926], 90.00th=[16712], 95.00th=[17433], 00:32:52.024 | 99.00th=[52167], 99.50th=[53216], 
99.90th=[57410], 99.95th=[57410], 00:32:52.024 | 99.99th=[57410] 00:32:52.024 bw ( KiB/s): min=17664, max=28928, per=31.38%, avg=26214.40, stdev=3165.86, samples=10 00:32:52.024 iops : min= 138, max= 226, avg=204.80, stdev=24.73, samples=10 00:32:52.024 lat (msec) : 10=1.66%, 20=96.69%, 50=0.29%, 100=1.36% 00:32:52.024 cpu : usr=93.86%, sys=5.67%, ctx=8, majf=0, minf=86 00:32:52.024 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.024 issued rwts: total=1027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.024 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:52.024 00:32:52.024 Run status group 0 (all jobs): 00:32:52.024 READ: bw=81.6MiB/s (85.5MB/s), 25.4MiB/s-29.1MiB/s (26.7MB/s-30.5MB/s), io=412MiB (432MB), run=5007-5048msec 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:52.024 23:03:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.024 bdev_null0 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.024 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.025 [2024-12-10 23:03:59.142509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.025 bdev_null1 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:32:52.025 bdev_null2 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # local subsystem config 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:52.025 { 00:32:52.025 "params": { 00:32:52.025 "name": "Nvme$subsystem", 00:32:52.025 "trtype": "$TEST_TRANSPORT", 00:32:52.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.025 "adrfam": "ipv4", 00:32:52.025 "trsvcid": "$NVMF_PORT", 00:32:52.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.025 "hdgst": ${hdgst:-false}, 00:32:52.025 "ddgst": ${ddgst:-false} 00:32:52.025 }, 00:32:52.025 "method": "bdev_nvme_attach_controller" 00:32:52.025 } 00:32:52.025 EOF 00:32:52.025 )") 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.025 23:03:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:52.025 { 00:32:52.025 "params": { 00:32:52.025 "name": "Nvme$subsystem", 00:32:52.025 "trtype": "$TEST_TRANSPORT", 00:32:52.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.025 "adrfam": "ipv4", 00:32:52.025 "trsvcid": "$NVMF_PORT", 00:32:52.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.025 "hdgst": ${hdgst:-false}, 00:32:52.025 "ddgst": ${ddgst:-false} 00:32:52.025 }, 00:32:52.025 "method": "bdev_nvme_attach_controller" 00:32:52.025 } 00:32:52.025 EOF 00:32:52.025 )") 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:52.025 23:03:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:52.025 { 00:32:52.025 "params": { 00:32:52.025 "name": "Nvme$subsystem", 00:32:52.025 "trtype": "$TEST_TRANSPORT", 00:32:52.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.025 "adrfam": "ipv4", 00:32:52.025 "trsvcid": "$NVMF_PORT", 00:32:52.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.025 "hdgst": ${hdgst:-false}, 00:32:52.025 "ddgst": ${ddgst:-false} 00:32:52.025 }, 00:32:52.025 "method": "bdev_nvme_attach_controller" 00:32:52.025 } 00:32:52.025 EOF 00:32:52.025 )") 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:52.025 23:03:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:52.025 "params": { 00:32:52.025 "name": "Nvme0", 00:32:52.025 "trtype": "tcp", 00:32:52.025 "traddr": "10.0.0.2", 00:32:52.025 "adrfam": "ipv4", 00:32:52.025 "trsvcid": "4420", 00:32:52.026 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:52.026 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:52.026 "hdgst": false, 00:32:52.026 "ddgst": false 00:32:52.026 }, 00:32:52.026 "method": "bdev_nvme_attach_controller" 00:32:52.026 },{ 00:32:52.026 "params": { 00:32:52.026 "name": "Nvme1", 00:32:52.026 "trtype": "tcp", 00:32:52.026 "traddr": "10.0.0.2", 00:32:52.026 "adrfam": "ipv4", 00:32:52.026 "trsvcid": "4420", 00:32:52.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:52.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:52.026 "hdgst": false, 00:32:52.026 "ddgst": false 00:32:52.026 }, 00:32:52.026 "method": "bdev_nvme_attach_controller" 00:32:52.026 },{ 00:32:52.026 "params": { 00:32:52.026 "name": "Nvme2", 00:32:52.026 "trtype": "tcp", 00:32:52.026 "traddr": "10.0.0.2", 00:32:52.026 "adrfam": "ipv4", 00:32:52.026 "trsvcid": "4420", 00:32:52.026 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:52.026 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:52.026 "hdgst": false, 00:32:52.026 "ddgst": false 00:32:52.026 }, 00:32:52.026 "method": "bdev_nvme_attach_controller" 00:32:52.026 }' 00:32:52.026 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:52.026 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:52.026 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.026 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.026 23:03:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:52.026 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:52.026 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:52.026 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:52.026 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:52.026 23:03:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.026 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:52.026 ... 00:32:52.026 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:52.026 ... 00:32:52.026 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:52.026 ... 
00:32:52.026 fio-3.35 00:32:52.026 Starting 24 threads 00:33:04.228 00:33:04.228 filename0: (groupid=0, jobs=1): err= 0: pid=245155: Tue Dec 10 23:04:10 2024 00:33:04.228 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10107msec) 00:33:04.228 slat (usec): min=8, max=101, avg=52.36, stdev=28.39 00:33:04.228 clat (msec): min=154, max=416, avg=296.80, stdev=53.16 00:33:04.228 lat (msec): min=154, max=416, avg=296.86, stdev=53.15 00:33:04.228 clat percentiles (msec): 00:33:04.228 | 1.00th=[ 190], 5.00th=[ 199], 10.00th=[ 228], 20.00th=[ 257], 00:33:04.228 | 30.00th=[ 268], 40.00th=[ 288], 50.00th=[ 300], 60.00th=[ 313], 00:33:04.228 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 405], 00:33:04.228 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 418], 99.95th=[ 418], 00:33:04.228 | 99.99th=[ 418] 00:33:04.228 bw ( KiB/s): min= 128, max= 384, per=3.51%, avg=211.20, stdev=69.95, samples=20 00:33:04.228 iops : min= 32, max= 96, avg=52.80, stdev=17.49, samples=20 00:33:04.228 lat (msec) : 250=16.36%, 500=83.64% 00:33:04.228 cpu : usr=98.09%, sys=1.27%, ctx=115, majf=0, minf=9 00:33:04.228 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:33:04.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.228 filename0: (groupid=0, jobs=1): err= 0: pid=245156: Tue Dec 10 23:04:10 2024 00:33:04.228 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10112msec) 00:33:04.228 slat (usec): min=4, max=109, avg=55.76, stdev=29.39 00:33:04.228 clat (msec): min=115, max=520, avg=280.39, stdev=47.94 00:33:04.228 lat (msec): min=115, max=520, avg=280.45, stdev=47.95 00:33:04.228 clat percentiles (msec): 00:33:04.228 | 1.00th=[ 169], 5.00th=[ 199], 10.00th=[ 234], 20.00th=[ 241], 00:33:04.228 | 30.00th=[ 251], 
40.00th=[ 266], 50.00th=[ 275], 60.00th=[ 296], 00:33:04.228 | 70.00th=[ 313], 80.00th=[ 326], 90.00th=[ 342], 95.00th=[ 347], 00:33:04.228 | 99.00th=[ 388], 99.50th=[ 439], 99.90th=[ 523], 99.95th=[ 523], 00:33:04.228 | 99.99th=[ 523] 00:33:04.228 bw ( KiB/s): min= 128, max= 368, per=3.71%, avg=224.00, stdev=69.06, samples=20 00:33:04.228 iops : min= 32, max= 92, avg=56.00, stdev=17.27, samples=20 00:33:04.228 lat (msec) : 250=29.17%, 500=70.49%, 750=0.35% 00:33:04.228 cpu : usr=98.14%, sys=1.26%, ctx=38, majf=0, minf=9 00:33:04.228 IO depths : 1=3.6%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:33:04.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.228 filename0: (groupid=0, jobs=1): err= 0: pid=245157: Tue Dec 10 23:04:10 2024 00:33:04.228 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10122msec) 00:33:04.228 slat (usec): min=27, max=125, avg=73.94, stdev=13.96 00:33:04.228 clat (msec): min=132, max=498, avg=297.07, stdev=61.59 00:33:04.228 lat (msec): min=132, max=498, avg=297.14, stdev=61.59 00:33:04.228 clat percentiles (msec): 00:33:04.228 | 1.00th=[ 150], 5.00th=[ 192], 10.00th=[ 203], 20.00th=[ 255], 00:33:04.228 | 30.00th=[ 266], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 313], 00:33:04.228 | 70.00th=[ 326], 80.00th=[ 342], 90.00th=[ 384], 95.00th=[ 405], 00:33:04.228 | 99.00th=[ 435], 99.50th=[ 477], 99.90th=[ 498], 99.95th=[ 498], 00:33:04.228 | 99.99th=[ 498] 00:33:04.228 bw ( KiB/s): min= 128, max= 256, per=3.51%, avg=211.20, stdev=59.55, samples=20 00:33:04.228 iops : min= 32, max= 64, avg=52.80, stdev=14.89, samples=20 00:33:04.228 lat (msec) : 250=16.91%, 500=83.09% 00:33:04.228 cpu : usr=97.77%, sys=1.52%, ctx=107, majf=0, minf=9 00:33:04.228 IO depths : 1=4.0%, 2=10.3%, 
4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:33:04.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.228 filename0: (groupid=0, jobs=1): err= 0: pid=245158: Tue Dec 10 23:04:10 2024 00:33:04.228 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10124msec) 00:33:04.228 slat (nsec): min=8272, max=94801, avg=36660.75, stdev=24309.68 00:33:04.228 clat (msec): min=161, max=357, avg=266.08, stdev=42.88 00:33:04.228 lat (msec): min=161, max=357, avg=266.12, stdev=42.88 00:33:04.228 clat percentiles (msec): 00:33:04.228 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 213], 20.00th=[ 222], 00:33:04.228 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 264], 60.00th=[ 271], 00:33:04.228 | 70.00th=[ 296], 80.00th=[ 305], 90.00th=[ 330], 95.00th=[ 334], 00:33:04.228 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:33:04.228 | 99.99th=[ 359] 00:33:04.228 bw ( KiB/s): min= 128, max= 256, per=3.93%, avg=236.80, stdev=46.89, samples=20 00:33:04.228 iops : min= 32, max= 64, avg=59.20, stdev=11.72, samples=20 00:33:04.228 lat (msec) : 250=34.21%, 500=65.79% 00:33:04.228 cpu : usr=98.03%, sys=1.44%, ctx=41, majf=0, minf=9 00:33:04.228 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:33:04.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.228 filename0: (groupid=0, jobs=1): err= 0: pid=245159: Tue Dec 10 23:04:10 2024 00:33:04.228 read: IOPS=57, BW=230KiB/s (236kB/s)(2328KiB/10107msec) 00:33:04.228 slat (usec): min=8, max=118, 
avg=53.55, stdev=28.89 00:33:04.228 clat (msec): min=132, max=467, avg=277.19, stdev=55.70 00:33:04.228 lat (msec): min=132, max=467, avg=277.25, stdev=55.71 00:33:04.228 clat percentiles (msec): 00:33:04.228 | 1.00th=[ 159], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 236], 00:33:04.228 | 30.00th=[ 249], 40.00th=[ 259], 50.00th=[ 275], 60.00th=[ 296], 00:33:04.228 | 70.00th=[ 305], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 384], 00:33:04.228 | 99.00th=[ 439], 99.50th=[ 447], 99.90th=[ 468], 99.95th=[ 468], 00:33:04.228 | 99.99th=[ 468] 00:33:04.228 bw ( KiB/s): min= 128, max= 256, per=3.76%, avg=226.40, stdev=48.77, samples=20 00:33:04.228 iops : min= 32, max= 64, avg=56.60, stdev=12.19, samples=20 00:33:04.228 lat (msec) : 250=30.24%, 500=69.76% 00:33:04.228 cpu : usr=98.23%, sys=1.24%, ctx=35, majf=0, minf=9 00:33:04.228 IO depths : 1=3.3%, 2=8.9%, 4=23.2%, 8=55.3%, 16=9.3%, 32=0.0%, >=64=0.0% 00:33:04.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.228 filename0: (groupid=0, jobs=1): err= 0: pid=245160: Tue Dec 10 23:04:10 2024 00:33:04.228 read: IOPS=68, BW=274KiB/s (280kB/s)(2776KiB/10145msec) 00:33:04.228 slat (nsec): min=4894, max=94121, avg=34315.92, stdev=27171.57 00:33:04.228 clat (msec): min=36, max=379, avg=233.30, stdev=60.90 00:33:04.228 lat (msec): min=36, max=379, avg=233.33, stdev=60.91 00:33:04.228 clat percentiles (msec): 00:33:04.228 | 1.00th=[ 37], 5.00th=[ 121], 10.00th=[ 171], 20.00th=[ 209], 00:33:04.228 | 30.00th=[ 218], 40.00th=[ 236], 50.00th=[ 241], 60.00th=[ 253], 00:33:04.228 | 70.00th=[ 255], 80.00th=[ 271], 90.00th=[ 296], 95.00th=[ 305], 00:33:04.228 | 99.00th=[ 351], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:33:04.228 | 99.99th=[ 380] 00:33:04.228 bw ( KiB/s): 
min= 144, max= 496, per=4.51%, avg=271.20, stdev=66.78, samples=20 00:33:04.228 iops : min= 36, max= 124, avg=67.80, stdev=16.69, samples=20 00:33:04.228 lat (msec) : 50=4.61%, 250=51.01%, 500=44.38% 00:33:04.228 cpu : usr=98.39%, sys=1.19%, ctx=14, majf=0, minf=9 00:33:04.228 IO depths : 1=3.5%, 2=8.8%, 4=22.2%, 8=56.5%, 16=9.1%, 32=0.0%, >=64=0.0% 00:33:04.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.228 issued rwts: total=694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.228 filename0: (groupid=0, jobs=1): err= 0: pid=245161: Tue Dec 10 23:04:10 2024 00:33:04.228 read: IOPS=70, BW=280KiB/s (287kB/s)(2840KiB/10135msec) 00:33:04.228 slat (nsec): min=8274, max=83563, avg=21357.07, stdev=16709.50 00:33:04.228 clat (msec): min=161, max=409, avg=228.01, stdev=36.88 00:33:04.228 lat (msec): min=161, max=409, avg=228.03, stdev=36.88 00:33:04.228 clat percentiles (msec): 00:33:04.228 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 192], 00:33:04.228 | 30.00th=[ 205], 40.00th=[ 218], 50.00th=[ 230], 60.00th=[ 236], 00:33:04.229 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 275], 95.00th=[ 296], 00:33:04.229 | 99.00th=[ 334], 99.50th=[ 368], 99.90th=[ 409], 99.95th=[ 409], 00:33:04.229 | 99.99th=[ 409] 00:33:04.229 bw ( KiB/s): min= 240, max= 384, per=4.61%, avg=277.60, stdev=40.63, samples=20 00:33:04.229 iops : min= 60, max= 96, avg=69.40, stdev=10.16, samples=20 00:33:04.229 lat (msec) : 250=72.39%, 500=27.61% 00:33:04.229 cpu : usr=98.31%, sys=1.26%, ctx=23, majf=0, minf=9 00:33:04.229 IO depths : 1=2.1%, 2=5.5%, 4=16.3%, 8=65.6%, 16=10.4%, 32=0.0%, >=64=0.0% 00:33:04.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 complete : 0=0.0%, 4=91.6%, 8=2.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 issued rwts: total=710,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:33:04.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.229 filename0: (groupid=0, jobs=1): err= 0: pid=245162: Tue Dec 10 23:04:10 2024 00:33:04.229 read: IOPS=68, BW=276KiB/s (282kB/s)(2792KiB/10132msec) 00:33:04.229 slat (usec): min=8, max=100, avg=24.67, stdev=24.07 00:33:04.229 clat (msec): min=82, max=421, avg=231.25, stdev=43.96 00:33:04.229 lat (msec): min=82, max=421, avg=231.28, stdev=43.96 00:33:04.229 clat percentiles (msec): 00:33:04.229 | 1.00th=[ 130], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 197], 00:33:04.229 | 30.00th=[ 209], 40.00th=[ 213], 50.00th=[ 226], 60.00th=[ 241], 00:33:04.229 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 292], 95.00th=[ 313], 00:33:04.229 | 99.00th=[ 388], 99.50th=[ 422], 99.90th=[ 422], 99.95th=[ 422], 00:33:04.229 | 99.99th=[ 422] 00:33:04.229 bw ( KiB/s): min= 176, max= 384, per=4.52%, avg=272.80, stdev=43.58, samples=20 00:33:04.229 iops : min= 44, max= 96, avg=68.20, stdev=10.89, samples=20 00:33:04.229 lat (msec) : 100=0.29%, 250=68.77%, 500=30.95% 00:33:04.229 cpu : usr=98.30%, sys=1.25%, ctx=20, majf=0, minf=9 00:33:04.229 IO depths : 1=1.0%, 2=3.0%, 4=11.9%, 8=72.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:33:04.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 complete : 0=0.0%, 4=90.3%, 8=4.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 issued rwts: total=698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.229 filename1: (groupid=0, jobs=1): err= 0: pid=245163: Tue Dec 10 23:04:10 2024 00:33:04.229 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10135msec) 00:33:04.229 slat (usec): min=8, max=111, avg=46.57, stdev=28.04 00:33:04.229 clat (msec): min=75, max=490, avg=246.79, stdev=74.89 00:33:04.229 lat (msec): min=75, max=490, avg=246.84, stdev=74.91 00:33:04.229 clat percentiles (msec): 00:33:04.229 | 1.00th=[ 75], 5.00th=[ 142], 10.00th=[ 155], 20.00th=[ 
182], 00:33:04.229 | 30.00th=[ 197], 40.00th=[ 224], 50.00th=[ 251], 60.00th=[ 271], 00:33:04.229 | 70.00th=[ 296], 80.00th=[ 313], 90.00th=[ 338], 95.00th=[ 351], 00:33:04.229 | 99.00th=[ 422], 99.50th=[ 477], 99.90th=[ 489], 99.95th=[ 489], 00:33:04.229 | 99.99th=[ 489] 00:33:04.229 bw ( KiB/s): min= 128, max= 384, per=4.24%, avg=256.00, stdev=80.75, samples=20 00:33:04.229 iops : min= 32, max= 96, avg=64.00, stdev=20.19, samples=20 00:33:04.229 lat (msec) : 100=2.44%, 250=46.34%, 500=51.22% 00:33:04.229 cpu : usr=98.16%, sys=1.29%, ctx=96, majf=0, minf=9 00:33:04.229 IO depths : 1=2.7%, 2=8.8%, 4=24.5%, 8=54.1%, 16=9.8%, 32=0.0%, >=64=0.0% 00:33:04.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.229 filename1: (groupid=0, jobs=1): err= 0: pid=245164: Tue Dec 10 23:04:10 2024 00:33:04.229 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10127msec) 00:33:04.229 slat (nsec): min=4094, max=96566, avg=48627.93, stdev=27785.31 00:33:04.229 clat (msec): min=191, max=417, avg=273.28, stdev=43.37 00:33:04.229 lat (msec): min=191, max=417, avg=273.33, stdev=43.38 00:33:04.229 clat percentiles (msec): 00:33:04.229 | 1.00th=[ 192], 5.00th=[ 194], 10.00th=[ 213], 20.00th=[ 241], 00:33:04.229 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 268], 60.00th=[ 275], 00:33:04.229 | 70.00th=[ 296], 80.00th=[ 317], 90.00th=[ 334], 95.00th=[ 338], 00:33:04.229 | 99.00th=[ 359], 99.50th=[ 380], 99.90th=[ 418], 99.95th=[ 418], 00:33:04.229 | 99.99th=[ 418] 00:33:04.229 bw ( KiB/s): min= 128, max= 256, per=3.83%, avg=230.40, stdev=52.53, samples=20 00:33:04.229 iops : min= 32, max= 64, avg=57.60, stdev=13.13, samples=20 00:33:04.229 lat (msec) : 250=25.34%, 500=74.66% 00:33:04.229 cpu : usr=98.29%, sys=1.26%, ctx=19, majf=0, minf=9 
00:33:04.229 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:33:04.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.229 filename1: (groupid=0, jobs=1): err= 0: pid=245165: Tue Dec 10 23:04:10 2024 00:33:04.229 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10115msec) 00:33:04.229 slat (nsec): min=4204, max=85507, avg=29655.06, stdev=18537.65 00:33:04.229 clat (msec): min=175, max=357, avg=265.92, stdev=45.35 00:33:04.229 lat (msec): min=175, max=357, avg=265.95, stdev=45.34 00:33:04.229 clat percentiles (msec): 00:33:04.229 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 226], 00:33:04.229 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 264], 60.00th=[ 275], 00:33:04.229 | 70.00th=[ 296], 80.00th=[ 305], 90.00th=[ 334], 95.00th=[ 347], 00:33:04.229 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:33:04.229 | 99.99th=[ 359] 00:33:04.229 bw ( KiB/s): min= 128, max= 272, per=3.93%, avg=236.80, stdev=47.18, samples=20 00:33:04.229 iops : min= 32, max= 68, avg=59.20, stdev=11.79, samples=20 00:33:04.229 lat (msec) : 250=37.17%, 500=62.83% 00:33:04.229 cpu : usr=98.29%, sys=1.32%, ctx=40, majf=0, minf=10 00:33:04.229 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:33:04.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.229 filename1: (groupid=0, jobs=1): err= 0: pid=245166: Tue Dec 10 23:04:10 2024 00:33:04.229 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10108msec) 
00:33:04.229 slat (usec): min=8, max=110, avg=34.84, stdev=26.77 00:33:04.229 clat (msec): min=152, max=499, avg=288.49, stdev=63.33 00:33:04.229 lat (msec): min=152, max=499, avg=288.52, stdev=63.32 00:33:04.229 clat percentiles (msec): 00:33:04.229 | 1.00th=[ 157], 5.00th=[ 192], 10.00th=[ 199], 20.00th=[ 236], 00:33:04.229 | 30.00th=[ 251], 40.00th=[ 271], 50.00th=[ 292], 60.00th=[ 305], 00:33:04.229 | 70.00th=[ 321], 80.00th=[ 338], 90.00th=[ 368], 95.00th=[ 388], 00:33:04.229 | 99.00th=[ 481], 99.50th=[ 489], 99.90th=[ 502], 99.95th=[ 502], 00:33:04.229 | 99.99th=[ 502] 00:33:04.229 bw ( KiB/s): min= 128, max= 384, per=3.61%, avg=217.60, stdev=70.49, samples=20 00:33:04.229 iops : min= 32, max= 96, avg=54.40, stdev=17.62, samples=20 00:33:04.229 lat (msec) : 250=25.71%, 500=74.29% 00:33:04.229 cpu : usr=98.12%, sys=1.31%, ctx=26, majf=0, minf=9 00:33:04.229 IO depths : 1=2.9%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:33:04.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.229 filename1: (groupid=0, jobs=1): err= 0: pid=245167: Tue Dec 10 23:04:10 2024 00:33:04.229 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10107msec) 00:33:04.229 slat (nsec): min=8444, max=89223, avg=22844.78, stdev=13836.28 00:33:04.229 clat (msec): min=152, max=434, avg=272.93, stdev=52.78 00:33:04.229 lat (msec): min=152, max=434, avg=272.95, stdev=52.78 00:33:04.229 clat percentiles (msec): 00:33:04.229 | 1.00th=[ 153], 5.00th=[ 182], 10.00th=[ 197], 20.00th=[ 232], 00:33:04.229 | 30.00th=[ 249], 40.00th=[ 257], 50.00th=[ 268], 60.00th=[ 296], 00:33:04.229 | 70.00th=[ 305], 80.00th=[ 326], 90.00th=[ 342], 95.00th=[ 347], 00:33:04.229 | 99.00th=[ 380], 99.50th=[ 409], 99.90th=[ 435], 99.95th=[ 435], 00:33:04.229 
| 99.99th=[ 435] 00:33:04.229 bw ( KiB/s): min= 128, max= 384, per=3.83%, avg=230.40, stdev=65.54, samples=20 00:33:04.229 iops : min= 32, max= 96, avg=57.60, stdev=16.38, samples=20 00:33:04.229 lat (msec) : 250=32.09%, 500=67.91% 00:33:04.229 cpu : usr=98.14%, sys=1.40%, ctx=41, majf=0, minf=9 00:33:04.229 IO depths : 1=4.7%, 2=10.8%, 4=24.5%, 8=52.2%, 16=7.8%, 32=0.0%, >=64=0.0% 00:33:04.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.229 filename1: (groupid=0, jobs=1): err= 0: pid=245168: Tue Dec 10 23:04:10 2024 00:33:04.229 read: IOPS=67, BW=272KiB/s (278kB/s)(2752KiB/10133msec) 00:33:04.229 slat (nsec): min=10476, max=63784, avg=23485.23, stdev=7461.53 00:33:04.229 clat (msec): min=74, max=396, avg=235.44, stdev=65.13 00:33:04.229 lat (msec): min=74, max=396, avg=235.46, stdev=65.13 00:33:04.229 clat percentiles (msec): 00:33:04.229 | 1.00th=[ 75], 5.00th=[ 133], 10.00th=[ 161], 20.00th=[ 184], 00:33:04.229 | 30.00th=[ 197], 40.00th=[ 215], 50.00th=[ 241], 60.00th=[ 255], 00:33:04.229 | 70.00th=[ 271], 80.00th=[ 296], 90.00th=[ 326], 95.00th=[ 338], 00:33:04.229 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 397], 99.95th=[ 397], 00:33:04.229 | 99.99th=[ 397] 00:33:04.229 bw ( KiB/s): min= 128, max= 512, per=4.46%, avg=268.80, stdev=98.85, samples=20 00:33:04.229 iops : min= 32, max= 128, avg=67.20, stdev=24.71, samples=20 00:33:04.229 lat (msec) : 100=4.65%, 250=47.38%, 500=47.97% 00:33:04.229 cpu : usr=97.71%, sys=1.64%, ctx=41, majf=0, minf=9 00:33:04.229 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:33:04.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.229 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:04.229 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.229 filename1: (groupid=0, jobs=1): err= 0: pid=245169: Tue Dec 10 23:04:10 2024 00:33:04.229 read: IOPS=53, BW=215KiB/s (220kB/s)(2168KiB/10107msec) 00:33:04.229 slat (nsec): min=8233, max=96758, avg=41646.49, stdev=28799.73 00:33:04.229 clat (msec): min=164, max=511, avg=297.66, stdev=62.24 00:33:04.230 lat (msec): min=164, max=511, avg=297.70, stdev=62.22 00:33:04.230 clat percentiles (msec): 00:33:04.230 | 1.00th=[ 176], 5.00th=[ 197], 10.00th=[ 209], 20.00th=[ 249], 00:33:04.230 | 30.00th=[ 266], 40.00th=[ 288], 50.00th=[ 300], 60.00th=[ 313], 00:33:04.230 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 376], 95.00th=[ 405], 00:33:04.230 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 510], 99.95th=[ 510], 00:33:04.230 | 99.99th=[ 510] 00:33:04.230 bw ( KiB/s): min= 112, max= 368, per=3.49%, avg=210.40, stdev=67.94, samples=20 00:33:04.230 iops : min= 28, max= 92, avg=52.60, stdev=16.98, samples=20 00:33:04.230 lat (msec) : 250=21.03%, 500=78.23%, 750=0.74% 00:33:04.230 cpu : usr=98.29%, sys=1.32%, ctx=49, majf=0, minf=9 00:33:04.230 IO depths : 1=3.5%, 2=9.6%, 4=24.5%, 8=53.5%, 16=8.9%, 32=0.0%, >=64=0.0% 00:33:04.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 issued rwts: total=542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.230 filename1: (groupid=0, jobs=1): err= 0: pid=245170: Tue Dec 10 23:04:10 2024 00:33:04.230 read: IOPS=74, BW=298KiB/s (305kB/s)(3016KiB/10130msec) 00:33:04.230 slat (nsec): min=8206, max=85817, avg=18017.36, stdev=16206.63 00:33:04.230 clat (msec): min=140, max=376, avg=214.49, stdev=34.67 00:33:04.230 lat (msec): min=140, max=376, avg=214.51, stdev=34.67 00:33:04.230 clat percentiles (msec): 
00:33:04.230 | 1.00th=[ 144], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 188], 00:33:04.230 | 30.00th=[ 194], 40.00th=[ 203], 50.00th=[ 209], 60.00th=[ 220], 00:33:04.230 | 70.00th=[ 230], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 257], 00:33:04.230 | 99.00th=[ 338], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:33:04.230 | 99.99th=[ 376] 00:33:04.230 bw ( KiB/s): min= 224, max= 384, per=4.91%, avg=295.20, stdev=49.10, samples=20 00:33:04.230 iops : min= 56, max= 96, avg=73.80, stdev=12.28, samples=20 00:33:04.230 lat (msec) : 250=85.41%, 500=14.59% 00:33:04.230 cpu : usr=98.38%, sys=1.16%, ctx=27, majf=0, minf=9 00:33:04.230 IO depths : 1=1.3%, 2=3.1%, 4=11.3%, 8=73.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:33:04.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 complete : 0=0.0%, 4=90.1%, 8=4.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 issued rwts: total=754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.230 filename2: (groupid=0, jobs=1): err= 0: pid=245171: Tue Dec 10 23:04:10 2024 00:33:04.230 read: IOPS=75, BW=304KiB/s (311kB/s)(3080KiB/10134msec) 00:33:04.230 slat (usec): min=8, max=103, avg=27.12, stdev=24.30 00:33:04.230 clat (msec): min=121, max=378, avg=210.06, stdev=45.67 00:33:04.230 lat (msec): min=121, max=378, avg=210.09, stdev=45.66 00:33:04.230 clat percentiles (msec): 00:33:04.230 | 1.00th=[ 136], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 176], 00:33:04.230 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 199], 60.00th=[ 211], 00:33:04.230 | 70.00th=[ 226], 80.00th=[ 249], 90.00th=[ 257], 95.00th=[ 292], 00:33:04.230 | 99.00th=[ 363], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:33:04.230 | 99.99th=[ 380] 00:33:04.230 bw ( KiB/s): min= 224, max= 432, per=5.01%, avg=301.60, stdev=59.93, samples=20 00:33:04.230 iops : min= 56, max= 108, avg=75.40, stdev=14.98, samples=20 00:33:04.230 lat (msec) : 250=81.82%, 500=18.18% 00:33:04.230 
cpu : usr=98.52%, sys=1.04%, ctx=21, majf=0, minf=9 00:33:04.230 IO depths : 1=0.4%, 2=1.0%, 4=7.7%, 8=78.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:33:04.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 complete : 0=0.0%, 4=89.0%, 8=5.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 issued rwts: total=770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.230 filename2: (groupid=0, jobs=1): err= 0: pid=245172: Tue Dec 10 23:04:10 2024 00:33:04.230 read: IOPS=78, BW=312KiB/s (320kB/s)(3168KiB/10150msec) 00:33:04.230 slat (usec): min=4, max=109, avg=49.58, stdev=27.03 00:33:04.230 clat (msec): min=36, max=364, avg=204.26, stdev=49.61 00:33:04.230 lat (msec): min=36, max=364, avg=204.31, stdev=49.61 00:33:04.230 clat percentiles (msec): 00:33:04.230 | 1.00th=[ 37], 5.00th=[ 122], 10.00th=[ 155], 20.00th=[ 178], 00:33:04.230 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 205], 60.00th=[ 220], 00:33:04.230 | 70.00th=[ 234], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 255], 00:33:04.230 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 363], 99.95th=[ 363], 00:33:04.230 | 99.99th=[ 363] 00:33:04.230 bw ( KiB/s): min= 224, max= 512, per=5.16%, avg=310.40, stdev=70.68, samples=20 00:33:04.230 iops : min= 56, max= 128, avg=77.60, stdev=17.67, samples=20 00:33:04.230 lat (msec) : 50=2.02%, 100=2.02%, 250=78.66%, 500=17.30% 00:33:04.230 cpu : usr=97.43%, sys=1.83%, ctx=181, majf=0, minf=9 00:33:04.230 IO depths : 1=1.8%, 2=4.2%, 4=13.3%, 8=69.9%, 16=10.9%, 32=0.0%, >=64=0.0% 00:33:04.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 complete : 0=0.0%, 4=90.7%, 8=3.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 issued rwts: total=792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.230 filename2: (groupid=0, jobs=1): err= 0: pid=245173: Tue Dec 10 23:04:10 2024 
00:33:04.230 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10110msec) 00:33:04.230 slat (nsec): min=4197, max=75937, avg=23783.84, stdev=13701.90 00:33:04.230 clat (msec): min=141, max=380, avg=252.52, stdev=49.66 00:33:04.230 lat (msec): min=141, max=380, avg=252.54, stdev=49.66 00:33:04.230 clat percentiles (msec): 00:33:04.230 | 1.00th=[ 142], 5.00th=[ 178], 10.00th=[ 190], 20.00th=[ 205], 00:33:04.230 | 30.00th=[ 220], 40.00th=[ 241], 50.00th=[ 253], 60.00th=[ 264], 00:33:04.230 | 70.00th=[ 271], 80.00th=[ 296], 90.00th=[ 330], 95.00th=[ 338], 00:33:04.230 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 380], 99.95th=[ 380], 00:33:04.230 | 99.99th=[ 380] 00:33:04.230 bw ( KiB/s): min= 128, max= 384, per=4.14%, avg=249.60, stdev=48.81, samples=20 00:33:04.230 iops : min= 32, max= 96, avg=62.40, stdev=12.20, samples=20 00:33:04.230 lat (msec) : 250=43.12%, 500=56.88% 00:33:04.230 cpu : usr=98.21%, sys=1.34%, ctx=70, majf=0, minf=9 00:33:04.230 IO depths : 1=2.7%, 2=8.9%, 4=25.0%, 8=53.6%, 16=9.8%, 32=0.0%, >=64=0.0% 00:33:04.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.230 filename2: (groupid=0, jobs=1): err= 0: pid=245174: Tue Dec 10 23:04:10 2024 00:33:04.230 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10106msec) 00:33:04.230 slat (usec): min=9, max=117, avg=72.26, stdev=16.17 00:33:04.230 clat (msec): min=152, max=514, avg=303.17, stdev=64.41 00:33:04.230 lat (msec): min=152, max=514, avg=303.24, stdev=64.41 00:33:04.230 clat percentiles (msec): 00:33:04.230 | 1.00th=[ 169], 5.00th=[ 199], 10.00th=[ 224], 20.00th=[ 251], 00:33:04.230 | 30.00th=[ 271], 40.00th=[ 296], 50.00th=[ 305], 60.00th=[ 313], 00:33:04.230 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 393], 95.00th=[ 418], 00:33:04.230 | 
99.00th=[ 506], 99.50th=[ 514], 99.90th=[ 514], 99.95th=[ 514], 00:33:04.230 | 99.99th=[ 514] 00:33:04.230 bw ( KiB/s): min= 128, max= 384, per=3.39%, avg=204.80, stdev=75.33, samples=20 00:33:04.230 iops : min= 32, max= 96, avg=51.20, stdev=18.83, samples=20 00:33:04.230 lat (msec) : 250=20.08%, 500=78.79%, 750=1.14% 00:33:04.230 cpu : usr=98.32%, sys=1.25%, ctx=15, majf=0, minf=9 00:33:04.230 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:33:04.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.230 filename2: (groupid=0, jobs=1): err= 0: pid=245175: Tue Dec 10 23:04:10 2024 00:33:04.230 read: IOPS=73, BW=294KiB/s (301kB/s)(2976KiB/10134msec) 00:33:04.230 slat (nsec): min=8289, max=55797, avg=16912.35, stdev=9701.58 00:33:04.230 clat (msec): min=141, max=377, avg=217.55, stdev=42.14 00:33:04.230 lat (msec): min=141, max=377, avg=217.56, stdev=42.14 00:33:04.230 clat percentiles (msec): 00:33:04.230 | 1.00th=[ 142], 5.00th=[ 153], 10.00th=[ 167], 20.00th=[ 184], 00:33:04.230 | 30.00th=[ 194], 40.00th=[ 205], 50.00th=[ 213], 60.00th=[ 226], 00:33:04.230 | 70.00th=[ 236], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 300], 00:33:04.230 | 99.00th=[ 372], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:33:04.230 | 99.99th=[ 380] 00:33:04.230 bw ( KiB/s): min= 224, max= 384, per=4.84%, avg=291.20, stdev=52.07, samples=20 00:33:04.230 iops : min= 56, max= 96, avg=72.80, stdev=13.02, samples=20 00:33:04.230 lat (msec) : 250=80.91%, 500=19.09% 00:33:04.230 cpu : usr=98.46%, sys=1.15%, ctx=17, majf=0, minf=9 00:33:04.230 IO depths : 1=1.5%, 2=4.3%, 4=14.5%, 8=68.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:33:04.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:04.230 complete : 0=0.0%, 4=91.1%, 8=3.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.230 issued rwts: total=744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.231 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.231 filename2: (groupid=0, jobs=1): err= 0: pid=245176: Tue Dec 10 23:04:10 2024 00:33:04.231 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10108msec) 00:33:04.231 slat (nsec): min=8434, max=93001, avg=24535.14, stdev=20889.19 00:33:04.231 clat (msec): min=129, max=478, avg=297.06, stdev=67.63 00:33:04.231 lat (msec): min=129, max=478, avg=297.09, stdev=67.62 00:33:04.231 clat percentiles (msec): 00:33:04.231 | 1.00th=[ 148], 5.00th=[ 182], 10.00th=[ 199], 20.00th=[ 239], 00:33:04.231 | 30.00th=[ 275], 40.00th=[ 292], 50.00th=[ 300], 60.00th=[ 317], 00:33:04.231 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 393], 95.00th=[ 414], 00:33:04.231 | 99.00th=[ 439], 99.50th=[ 468], 99.90th=[ 481], 99.95th=[ 481], 00:33:04.231 | 99.99th=[ 481] 00:33:04.231 bw ( KiB/s): min= 128, max= 368, per=3.51%, avg=211.20, stdev=68.40, samples=20 00:33:04.231 iops : min= 32, max= 92, avg=52.80, stdev=17.10, samples=20 00:33:04.231 lat (msec) : 250=22.79%, 500=77.21% 00:33:04.231 cpu : usr=98.41%, sys=1.18%, ctx=14, majf=0, minf=9 00:33:04.231 IO depths : 1=3.3%, 2=9.4%, 4=24.4%, 8=53.7%, 16=9.2%, 32=0.0%, >=64=0.0% 00:33:04.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.231 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.231 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.231 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.231 filename2: (groupid=0, jobs=1): err= 0: pid=245177: Tue Dec 10 23:04:10 2024 00:33:04.231 read: IOPS=67, BW=268KiB/s (275kB/s)(2720KiB/10135msec) 00:33:04.231 slat (nsec): min=5244, max=99986, avg=31518.12, stdev=27346.58 00:33:04.231 clat (msec): min=132, max=393, avg=237.82, stdev=44.07 00:33:04.231 lat (msec): 
min=132, max=393, avg=237.85, stdev=44.07 00:33:04.231 clat percentiles (msec): 00:33:04.231 | 1.00th=[ 148], 5.00th=[ 167], 10.00th=[ 186], 20.00th=[ 201], 00:33:04.231 | 30.00th=[ 215], 40.00th=[ 226], 50.00th=[ 241], 60.00th=[ 251], 00:33:04.231 | 70.00th=[ 255], 80.00th=[ 268], 90.00th=[ 292], 95.00th=[ 313], 00:33:04.231 | 99.00th=[ 380], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:33:04.231 | 99.99th=[ 393] 00:33:04.231 bw ( KiB/s): min= 144, max= 384, per=4.41%, avg=265.60, stdev=50.97, samples=20 00:33:04.231 iops : min= 36, max= 96, avg=66.40, stdev=12.74, samples=20 00:33:04.231 lat (msec) : 250=58.24%, 500=41.76% 00:33:04.231 cpu : usr=98.25%, sys=1.19%, ctx=32, majf=0, minf=9 00:33:04.231 IO depths : 1=1.5%, 2=4.4%, 4=14.9%, 8=68.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:33:04.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.231 complete : 0=0.0%, 4=91.1%, 8=3.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.231 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.231 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.231 filename2: (groupid=0, jobs=1): err= 0: pid=245178: Tue Dec 10 23:04:10 2024 00:33:04.231 read: IOPS=60, BW=241KiB/s (246kB/s)(2432KiB/10110msec) 00:33:04.231 slat (usec): min=8, max=106, avg=39.38, stdev=26.73 00:33:04.231 clat (msec): min=173, max=458, avg=265.74, stdev=45.88 00:33:04.231 lat (msec): min=173, max=458, avg=265.78, stdev=45.88 00:33:04.231 clat percentiles (msec): 00:33:04.231 | 1.00th=[ 174], 5.00th=[ 199], 10.00th=[ 209], 20.00th=[ 228], 00:33:04.231 | 30.00th=[ 239], 40.00th=[ 251], 50.00th=[ 259], 60.00th=[ 271], 00:33:04.231 | 70.00th=[ 296], 80.00th=[ 313], 90.00th=[ 330], 95.00th=[ 347], 00:33:04.231 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 460], 99.95th=[ 460], 00:33:04.231 | 99.99th=[ 460] 00:33:04.231 bw ( KiB/s): min= 128, max= 384, per=3.93%, avg=236.80, stdev=71.10, samples=20 00:33:04.231 iops : min= 32, max= 96, avg=59.20, 
stdev=17.78, samples=20 00:33:04.231 lat (msec) : 250=39.14%, 500=60.86% 00:33:04.231 cpu : usr=98.13%, sys=1.30%, ctx=71, majf=0, minf=9 00:33:04.231 IO depths : 1=2.5%, 2=8.7%, 4=25.0%, 8=53.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:33:04.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.231 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.231 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.231 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.231 00:33:04.231 Run status group 0 (all jobs): 00:33:04.231 READ: bw=6011KiB/s (6156kB/s), 209KiB/s-312KiB/s (214kB/s-320kB/s), io=59.6MiB (62.5MB), run=10106-10150msec 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.231 bdev_null0 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.231 [2024-12-10 23:04:10.939837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.231 bdev_null1 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.231 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # 
for subsystem in "${@:-1}" 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:04.232 { 00:33:04.232 "params": { 00:33:04.232 "name": "Nvme$subsystem", 00:33:04.232 "trtype": "$TEST_TRANSPORT", 00:33:04.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:04.232 "adrfam": "ipv4", 00:33:04.232 "trsvcid": "$NVMF_PORT", 00:33:04.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:04.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:04.232 "hdgst": ${hdgst:-false}, 00:33:04.232 "ddgst": ${ddgst:-false} 00:33:04.232 }, 00:33:04.232 "method": "bdev_nvme_attach_controller" 00:33:04.232 } 00:33:04.232 EOF 00:33:04.232 )") 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:04.232 { 00:33:04.232 "params": { 00:33:04.232 "name": "Nvme$subsystem", 00:33:04.232 "trtype": "$TEST_TRANSPORT", 00:33:04.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:04.232 "adrfam": "ipv4", 00:33:04.232 "trsvcid": "$NVMF_PORT", 00:33:04.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:04.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:04.232 "hdgst": ${hdgst:-false}, 00:33:04.232 "ddgst": ${ddgst:-false} 00:33:04.232 }, 00:33:04.232 "method": "bdev_nvme_attach_controller" 00:33:04.232 } 00:33:04.232 EOF 00:33:04.232 )") 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:04.232 "params": { 00:33:04.232 "name": "Nvme0", 00:33:04.232 "trtype": "tcp", 00:33:04.232 "traddr": "10.0.0.2", 00:33:04.232 "adrfam": "ipv4", 00:33:04.232 "trsvcid": "4420", 00:33:04.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:04.232 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:04.232 "hdgst": false, 00:33:04.232 "ddgst": false 00:33:04.232 }, 00:33:04.232 "method": "bdev_nvme_attach_controller" 00:33:04.232 },{ 00:33:04.232 "params": { 00:33:04.232 "name": "Nvme1", 00:33:04.232 "trtype": "tcp", 00:33:04.232 "traddr": "10.0.0.2", 00:33:04.232 "adrfam": "ipv4", 00:33:04.232 "trsvcid": "4420", 00:33:04.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:04.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:04.232 "hdgst": false, 00:33:04.232 "ddgst": false 00:33:04.232 }, 00:33:04.232 "method": "bdev_nvme_attach_controller" 00:33:04.232 }' 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:04.232 23:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:04.232 23:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:04.232 23:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:04.232 23:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:04.232 23:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:04.232 23:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:04.232 
23:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:04.232 23:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:04.232 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:04.232 ... 00:33:04.232 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:04.232 ... 00:33:04.232 fio-3.35 00:33:04.232 Starting 4 threads 00:33:09.499 00:33:09.499 filename0: (groupid=0, jobs=1): err= 0: pid=246562: Tue Dec 10 23:04:17 2024 00:33:09.499 read: IOPS=1900, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5001msec) 00:33:09.499 slat (nsec): min=4431, max=54739, avg=14215.51, stdev=5630.91 00:33:09.499 clat (usec): min=824, max=7987, avg=4161.37, stdev=617.56 00:33:09.499 lat (usec): min=837, max=8011, avg=4175.59, stdev=618.11 00:33:09.499 clat percentiles (usec): 00:33:09.499 | 1.00th=[ 2212], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3851], 00:33:09.499 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:33:09.499 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4948], 00:33:09.499 | 99.00th=[ 6849], 99.50th=[ 7308], 99.90th=[ 7701], 99.95th=[ 7832], 00:33:09.499 | 99.99th=[ 7963] 00:33:09.499 bw ( KiB/s): min=14720, max=15920, per=25.42%, avg=15306.67, stdev=352.00, samples=9 00:33:09.499 iops : min= 1840, max= 1990, avg=1913.33, stdev=44.00, samples=9 00:33:09.499 lat (usec) : 1000=0.05% 00:33:09.499 lat (msec) : 2=0.67%, 4=26.07%, 10=73.21% 00:33:09.499 cpu : usr=95.06%, sys=4.48%, ctx=13, majf=0, minf=54 00:33:09.499 IO depths : 1=0.4%, 2=14.5%, 4=57.5%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.499 
complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.499 issued rwts: total=9502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.499 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:09.499 filename0: (groupid=0, jobs=1): err= 0: pid=246563: Tue Dec 10 23:04:17 2024 00:33:09.499 read: IOPS=1851, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5003msec) 00:33:09.499 slat (nsec): min=4012, max=58846, avg=14953.76, stdev=6292.06 00:33:09.499 clat (usec): min=846, max=7761, avg=4266.45, stdev=687.53 00:33:09.499 lat (usec): min=859, max=7782, avg=4281.40, stdev=687.60 00:33:09.499 clat percentiles (usec): 00:33:09.499 | 1.00th=[ 2606], 5.00th=[ 3490], 10.00th=[ 3720], 20.00th=[ 3982], 00:33:09.499 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:33:09.499 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5669], 00:33:09.499 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 7570], 99.95th=[ 7570], 00:33:09.499 | 99.99th=[ 7767] 00:33:09.499 bw ( KiB/s): min=14352, max=15216, per=24.59%, avg=14809.30, stdev=286.20, samples=10 00:33:09.499 iops : min= 1794, max= 1902, avg=1851.10, stdev=35.75, samples=10 00:33:09.499 lat (usec) : 1000=0.14% 00:33:09.499 lat (msec) : 2=0.49%, 4=20.62%, 10=78.75% 00:33:09.499 cpu : usr=94.24%, sys=4.94%, ctx=168, majf=0, minf=73 00:33:09.499 IO depths : 1=0.1%, 2=16.4%, 4=56.0%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.499 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.499 issued rwts: total=9262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.499 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:09.499 filename1: (groupid=0, jobs=1): err= 0: pid=246564: Tue Dec 10 23:04:17 2024 00:33:09.499 read: IOPS=1861, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5002msec) 00:33:09.499 slat (nsec): min=4720, max=62513, avg=14684.53, stdev=5962.08 00:33:09.499 clat (usec): 
min=782, max=10296, avg=4244.66, stdev=698.57 00:33:09.499 lat (usec): min=796, max=10307, avg=4259.34, stdev=698.63 00:33:09.499 clat percentiles (usec): 00:33:09.499 | 1.00th=[ 2245], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3949], 00:33:09.499 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:33:09.499 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5473], 00:33:09.499 | 99.00th=[ 7046], 99.50th=[ 7373], 99.90th=[ 7635], 99.95th=[ 7701], 00:33:09.499 | 99.99th=[10290] 00:33:09.499 bw ( KiB/s): min=14480, max=15440, per=24.73%, avg=14896.00, stdev=355.78, samples=10 00:33:09.499 iops : min= 1810, max= 1930, avg=1862.00, stdev=44.47, samples=10 00:33:09.499 lat (usec) : 1000=0.06% 00:33:09.499 lat (msec) : 2=0.67%, 4=22.01%, 10=77.25%, 20=0.01% 00:33:09.500 cpu : usr=94.34%, sys=5.14%, ctx=11, majf=0, minf=61 00:33:09.500 IO depths : 1=0.2%, 2=15.3%, 4=56.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.500 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.500 issued rwts: total=9311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.500 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:09.500 filename1: (groupid=0, jobs=1): err= 0: pid=246565: Tue Dec 10 23:04:17 2024 00:33:09.500 read: IOPS=1916, BW=15.0MiB/s (15.7MB/s)(74.9MiB/5003msec) 00:33:09.500 slat (nsec): min=4503, max=60617, avg=15195.72, stdev=6186.04 00:33:09.500 clat (usec): min=795, max=7901, avg=4116.80, stdev=533.58 00:33:09.500 lat (usec): min=808, max=7923, avg=4131.99, stdev=534.33 00:33:09.500 clat percentiles (usec): 00:33:09.500 | 1.00th=[ 2442], 5.00th=[ 3392], 10.00th=[ 3589], 20.00th=[ 3818], 00:33:09.500 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:33:09.500 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4752], 00:33:09.500 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 7439], 
99.95th=[ 7701], 00:33:09.500 | 99.99th=[ 7898] 00:33:09.500 bw ( KiB/s): min=14464, max=15792, per=25.45%, avg=15329.40, stdev=424.10, samples=10 00:33:09.500 iops : min= 1808, max= 1974, avg=1916.10, stdev=52.99, samples=10 00:33:09.500 lat (usec) : 1000=0.10% 00:33:09.500 lat (msec) : 2=0.54%, 4=27.36%, 10=71.99% 00:33:09.500 cpu : usr=89.80%, sys=6.98%, ctx=344, majf=0, minf=112 00:33:09.500 IO depths : 1=0.4%, 2=18.8%, 4=54.9%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.500 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.500 issued rwts: total=9587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.500 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:09.500 00:33:09.500 Run status group 0 (all jobs): 00:33:09.500 READ: bw=58.8MiB/s (61.7MB/s), 14.5MiB/s-15.0MiB/s (15.2MB/s-15.7MB/s), io=294MiB (309MB), run=5001-5003msec 00:33:09.759 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:09.759 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:09.759 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:09.759 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:09.759 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:09.759 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:09.759 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.759 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:09.760 
23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.760 00:33:09.760 real 0m24.469s 00:33:09.760 user 4m36.343s 00:33:09.760 sys 0m6.079s 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.760 23:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.760 ************************************ 00:33:09.760 END TEST fio_dif_rand_params 00:33:09.760 ************************************ 00:33:09.760 23:04:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:09.760 23:04:17 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:09.760 23:04:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.760 23:04:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:09.760 ************************************ 00:33:09.760 START TEST fio_dif_digest 00:33:09.760 ************************************ 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
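The `create_subsystems 0` step above issues `bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3`: a 64 MiB null bdev with 512-byte blocks and 16 bytes of per-block metadata for the DIF fields. A quick sanity check of what that geometry implies (illustrative arithmetic only, not SPDK code):

```shell
# Geometry implied by "bdev_null_create bdev_null0 64 512 --md-size 16":
# 64 MiB of data capacity, 512-byte blocks, 16-byte metadata per block.
size_mib=64 block=512 md=16
blocks=$(( size_mib * 1024 * 1024 / block ))      # number of logical blocks
with_md=$(( blocks * (block + md) ))              # bytes incl. DIF metadata
echo "$blocks blocks, $with_md bytes incl. metadata"
```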
00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:09.760 bdev_null0 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:09.760 [2024-12-10 23:04:17.412798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:09.760 { 00:33:09.760 "params": { 00:33:09.760 "name": "Nvme$subsystem", 00:33:09.760 "trtype": "$TEST_TRANSPORT", 00:33:09.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:09.760 "adrfam": "ipv4", 00:33:09.760 "trsvcid": "$NVMF_PORT", 00:33:09.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:09.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:09.760 "hdgst": ${hdgst:-false}, 00:33:09.760 "ddgst": ${ddgst:-false} 00:33:09.760 }, 00:33:09.760 "method": "bdev_nvme_attach_controller" 00:33:09.760 } 00:33:09.760 EOF 00:33:09.760 )") 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
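The `config+=("$(cat <<-EOF ... EOF)")` fragments traced above accumulate one JSON object per subsystem; `IFS=,` then joins the array elements before `jq .` normalizes the result into the config fed to fio. A minimal standalone sketch of that accumulation pattern (plain strings in place of the harness heredocs; the `hdgst` default is taken from the trace, the function name is hypothetical):

```shell
# Reduction of the gen_nvmf_target_json loop seen in the trace:
# one JSON fragment per subsystem index, comma-joined via IFS.
build_config() {
  local config=() subsystem
  for subsystem in "${@:-1}"; do
    config+=("{\"name\": \"Nvme$subsystem\", \"hdgst\": ${hdgst:-false}}")
  done
  local IFS=,                       # "${config[*]}" joins on the first IFS char
  printf '%s\n' "${config[*]}"
}
```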
00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:09.760 "params": { 00:33:09.760 "name": "Nvme0", 00:33:09.760 "trtype": "tcp", 00:33:09.760 "traddr": "10.0.0.2", 00:33:09.760 "adrfam": "ipv4", 00:33:09.760 "trsvcid": "4420", 00:33:09.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.760 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:09.760 "hdgst": true, 00:33:09.760 "ddgst": true 00:33:09.760 }, 00:33:09.760 "method": "bdev_nvme_attach_controller" 00:33:09.760 }' 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:09.760 23:04:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.019 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:10.019 ... 
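Before launching fio, the harness probes the plugin with `ldd ... | grep libasan | awk '{print $3}'` so that a linked ASan runtime can be put in `LD_PRELOAD` ahead of the plugin. The field extraction can be exercised in isolation (the ldd output below is canned sample text, not taken from a real binary):

```shell
# Mimics the asan_lib probe from the trace: print the resolved-path field
# (third whitespace-separated token) of any libasan line in ldd output.
asan_lib_for() {
  printf '%s\n' "$1" | grep libasan | awk '{print $3}'
}

sample='  libasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f0000000000)
  libc.so.6 => /lib64/libc.so.6 (0x00007f0000400000)'
asan_lib=$(asan_lib_for "$sample")
```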
00:33:10.019 fio-3.35 00:33:10.019 Starting 3 threads 00:33:22.263 00:33:22.263 filename0: (groupid=0, jobs=1): err= 0: pid=247439: Tue Dec 10 23:04:28 2024 00:33:22.263 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(263MiB/10006msec) 00:33:22.263 slat (nsec): min=5811, max=47697, avg=14295.05, stdev=3312.26 00:33:22.263 clat (usec): min=8523, max=19783, avg=14275.76, stdev=1122.77 00:33:22.263 lat (usec): min=8536, max=19809, avg=14290.06, stdev=1122.90 00:33:22.263 clat percentiles (usec): 00:33:22.263 | 1.00th=[10028], 5.00th=[12518], 10.00th=[13042], 20.00th=[13566], 00:33:22.263 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14353], 60.00th=[14615], 00:33:22.263 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[15795], 00:33:22.263 | 99.00th=[16450], 99.50th=[16909], 99.90th=[19792], 99.95th=[19792], 00:33:22.263 | 99.99th=[19792] 00:33:22.263 bw ( KiB/s): min=26112, max=27904, per=34.78%, avg=26841.60, stdev=507.09, samples=20 00:33:22.263 iops : min= 204, max= 218, avg=209.70, stdev= 3.96, samples=20 00:33:22.263 lat (msec) : 10=0.90%, 20=99.10% 00:33:22.263 cpu : usr=92.63%, sys=6.81%, ctx=22, majf=0, minf=150 00:33:22.263 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.263 issued rwts: total=2100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.263 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:22.263 filename0: (groupid=0, jobs=1): err= 0: pid=247440: Tue Dec 10 23:04:28 2024 00:33:22.263 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10047msec) 00:33:22.263 slat (nsec): min=7350, max=43482, avg=14350.93, stdev=3360.44 00:33:22.263 clat (usec): min=8063, max=54832, avg=14779.20, stdev=1778.00 00:33:22.263 lat (usec): min=8076, max=54845, avg=14793.55, stdev=1778.05 00:33:22.263 clat percentiles (usec): 00:33:22.263 | 1.00th=[ 9896], 
5.00th=[12649], 10.00th=[13304], 20.00th=[13960], 00:33:22.263 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:33:22.263 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:33:22.263 | 99.00th=[17695], 99.50th=[18220], 99.90th=[21627], 99.95th=[49546], 00:33:22.263 | 99.99th=[54789] 00:33:22.263 bw ( KiB/s): min=25344, max=27392, per=33.70%, avg=26009.60, stdev=534.41, samples=20 00:33:22.263 iops : min= 198, max= 214, avg=203.20, stdev= 4.18, samples=20 00:33:22.263 lat (msec) : 10=1.23%, 20=98.53%, 50=0.20%, 100=0.05% 00:33:22.263 cpu : usr=92.41%, sys=7.06%, ctx=12, majf=0, minf=134 00:33:22.263 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.263 issued rwts: total=2034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.263 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:22.263 filename0: (groupid=0, jobs=1): err= 0: pid=247441: Tue Dec 10 23:04:28 2024 00:33:22.263 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(240MiB/10046msec) 00:33:22.263 slat (nsec): min=7754, max=38280, avg=14297.17, stdev=3350.80 00:33:22.263 clat (usec): min=12276, max=58864, avg=15633.07, stdev=4195.21 00:33:22.263 lat (usec): min=12289, max=58879, avg=15647.37, stdev=4195.18 00:33:22.264 clat percentiles (usec): 00:33:22.264 | 1.00th=[13042], 5.00th=[13698], 10.00th=[14091], 20.00th=[14484], 00:33:22.264 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15139], 60.00th=[15401], 00:33:22.264 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:33:22.264 | 99.00th=[45876], 99.50th=[55837], 99.90th=[57934], 99.95th=[58983], 00:33:22.264 | 99.99th=[58983] 00:33:22.264 bw ( KiB/s): min=22528, max=25600, per=31.86%, avg=24588.80, stdev=1068.43, samples=20 00:33:22.264 iops : min= 176, max= 200, avg=192.10, stdev= 8.35, 
samples=20 00:33:22.264 lat (msec) : 20=98.91%, 50=0.10%, 100=0.99% 00:33:22.264 cpu : usr=92.74%, sys=6.73%, ctx=17, majf=0, minf=126 00:33:22.264 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.264 issued rwts: total=1923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.264 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:22.264 00:33:22.264 Run status group 0 (all jobs): 00:33:22.264 READ: bw=75.4MiB/s (79.0MB/s), 23.9MiB/s-26.2MiB/s (25.1MB/s-27.5MB/s), io=757MiB (794MB), run=10006-10047msec 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.264 00:33:22.264 real 
0m11.311s 00:33:22.264 user 0m29.198s 00:33:22.264 sys 0m2.373s 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.264 23:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:22.264 ************************************ 00:33:22.264 END TEST fio_dif_digest 00:33:22.264 ************************************ 00:33:22.264 23:04:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:22.264 23:04:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:22.264 23:04:28 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:22.264 23:04:28 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:33:22.264 23:04:28 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.264 23:04:28 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:33:22.264 23:04:28 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.264 23:04:28 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.264 rmmod nvme_tcp 00:33:22.264 rmmod nvme_fabrics 00:33:22.264 rmmod nvme_keyring 00:33:22.264 23:04:28 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:22.264 23:04:28 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:33:22.264 23:04:28 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:33:22.264 23:04:28 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 241253 ']' 00:33:22.264 23:04:28 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 241253 00:33:22.264 23:04:28 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 241253 ']' 00:33:22.264 23:04:28 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 241253 00:33:22.264 23:04:28 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:33:22.264 23:04:28 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.264 23:04:28 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241253 00:33:22.264 23:04:28 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:22.264 23:04:28 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:22.264 23:04:28 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241253' 00:33:22.264 killing process with pid 241253 00:33:22.264 23:04:28 nvmf_dif -- common/autotest_common.sh@973 -- # kill 241253 00:33:22.264 23:04:28 nvmf_dif -- common/autotest_common.sh@978 -- # wait 241253 00:33:22.264 23:04:29 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:22.264 23:04:29 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:22.522 Waiting for block devices as requested 00:33:22.522 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:22.782 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:22.782 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:23.042 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:23.042 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:23.042 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:23.302 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:23.302 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:23.302 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:23.302 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:23.561 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:23.561 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:23.561 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:23.561 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:23.820 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:23.820 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:23.820 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:24.078 23:04:31 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:24.078 23:04:31 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:24.078 23:04:31 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:33:24.078 23:04:31 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:33:24.078 23:04:31 nvmf_dif -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:33:24.078 23:04:31 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:33:24.078 23:04:31 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:24.078 23:04:31 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:24.078 23:04:31 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.078 23:04:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:24.078 23:04:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.983 23:04:33 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:25.983 00:33:25.983 real 1m7.550s 00:33:25.983 user 6m34.196s 00:33:25.983 sys 0m17.357s 00:33:25.984 23:04:33 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.984 23:04:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:25.984 ************************************ 00:33:25.984 END TEST nvmf_dif 00:33:25.984 ************************************ 00:33:25.984 23:04:33 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:25.984 23:04:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:25.984 23:04:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:25.984 23:04:33 -- common/autotest_common.sh@10 -- # set +x 00:33:25.984 ************************************ 00:33:25.984 START TEST nvmf_abort_qd_sizes 00:33:25.984 ************************************ 00:33:25.984 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:26.242 * Looking for test storage... 
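The `iptr` teardown step above appears to pipe `iptables-save` through `grep -v SPDK_NVMF` before `iptables-restore`, so only SPDK-tagged firewall rules are dropped. The filtering step on its own (the rule text is made up for illustration):

```shell
# Keep every saved rule except those tagged SPDK_NVMF, as the teardown does.
rules='-A INPUT -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT'
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
```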
00:33:26.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:26.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.242 --rc genhtml_branch_coverage=1 00:33:26.242 --rc genhtml_function_coverage=1 00:33:26.242 --rc genhtml_legend=1 00:33:26.242 --rc geninfo_all_blocks=1 00:33:26.242 --rc geninfo_unexecuted_blocks=1 00:33:26.242 00:33:26.242 ' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:26.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.242 --rc genhtml_branch_coverage=1 00:33:26.242 --rc genhtml_function_coverage=1 00:33:26.242 --rc genhtml_legend=1 00:33:26.242 --rc 
geninfo_all_blocks=1 00:33:26.242 --rc geninfo_unexecuted_blocks=1 00:33:26.242 00:33:26.242 ' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:26.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.242 --rc genhtml_branch_coverage=1 00:33:26.242 --rc genhtml_function_coverage=1 00:33:26.242 --rc genhtml_legend=1 00:33:26.242 --rc geninfo_all_blocks=1 00:33:26.242 --rc geninfo_unexecuted_blocks=1 00:33:26.242 00:33:26.242 ' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:26.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.242 --rc genhtml_branch_coverage=1 00:33:26.242 --rc genhtml_function_coverage=1 00:33:26.242 --rc genhtml_legend=1 00:33:26.242 --rc geninfo_all_blocks=1 00:33:26.242 --rc geninfo_unexecuted_blocks=1 00:33:26.242 00:33:26.242 ' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.242 23:04:33 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.242 23:04:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:26.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:26.242 23:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:33:26.243 23:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:28.145 23:04:35 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:28.145 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:28.404 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:28.404 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:28.404 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:28.404 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:28.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:28.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:33:28.404 00:33:28.404 --- 10.0.0.2 ping statistics --- 00:33:28.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.404 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:28.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:28.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:33:28.404 00:33:28.404 --- 10.0.0.1 ping statistics --- 00:33:28.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.404 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:28.404 23:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:29.781 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:29.781 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:29.781 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:29.781 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:29.781 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:29.781 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:29.781 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:29.781 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:29.781 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:29.781 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:29.781 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:29.781 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:29.781 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:29.781 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:29.781 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:29.781 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:30.712 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:30.712 23:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.712 23:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:30.712 23:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:30.712 23:04:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.712 23:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:30.713 23:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=252359 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 252359 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 252359 ']' 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.972 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:30.972 [2024-12-10 23:04:38.498470] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:33:30.972 [2024-12-10 23:04:38.498571] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.972 [2024-12-10 23:04:38.568974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:30.972 [2024-12-10 23:04:38.624908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.972 [2024-12-10 23:04:38.624964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.972 [2024-12-10 23:04:38.624991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.972 [2024-12-10 23:04:38.625002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.972 [2024-12-10 23:04:38.625011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:30.972 [2024-12-10 23:04:38.626403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.972 [2024-12-10 23:04:38.626509] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:30.972 [2024-12-10 23:04:38.626599] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:33:30.972 [2024-12-10 23:04:38.626604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:31.232 23:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:31.232 ************************************ 00:33:31.232 START TEST spdk_target_abort 00:33:31.232 ************************************ 00:33:31.233 23:04:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:33:31.233 23:04:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:31.233 23:04:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:33:31.233 23:04:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.233 23:04:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:34.520 spdk_targetn1 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:34.520 [2024-12-10 23:04:41.654029] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:34.520 [2024-12-10 23:04:41.702344] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:34.520 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:34.521 23:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:37.809 Initializing NVMe Controllers 00:33:37.809 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:37.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:37.809 Initialization complete. Launching workers. 
00:33:37.809 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12486, failed: 0 00:33:37.809 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1319, failed to submit 11167 00:33:37.809 success 702, unsuccessful 617, failed 0 00:33:37.810 23:04:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:37.810 23:04:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:41.096 Initializing NVMe Controllers 00:33:41.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:41.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:41.096 Initialization complete. Launching workers. 00:33:41.096 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8657, failed: 0 00:33:41.096 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 7432 00:33:41.096 success 323, unsuccessful 902, failed 0 00:33:41.096 23:04:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:41.096 23:04:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:44.390 Initializing NVMe Controllers 00:33:44.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:44.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:44.390 Initialization complete. Launching workers. 
00:33:44.390 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31282, failed: 0 00:33:44.390 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2664, failed to submit 28618 00:33:44.390 success 539, unsuccessful 2125, failed 0 00:33:44.390 23:04:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:44.390 23:04:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.390 23:04:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:44.390 23:04:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.390 23:04:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:44.390 23:04:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.390 23:04:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:45.323 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.323 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 252359 00:33:45.323 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 252359 ']' 00:33:45.323 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 252359 00:33:45.323 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:33:45.323 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.323 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 252359 00:33:45.323 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:45.324 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:45.324 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 252359' 00:33:45.324 killing process with pid 252359 00:33:45.324 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 252359 00:33:45.324 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 252359 00:33:45.324 00:33:45.324 real 0m14.174s 00:33:45.324 user 0m53.458s 00:33:45.324 sys 0m2.737s 00:33:45.324 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.324 23:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:45.324 ************************************ 00:33:45.324 END TEST spdk_target_abort 00:33:45.324 ************************************ 00:33:45.324 23:04:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:45.324 23:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:45.324 23:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.324 23:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:45.324 ************************************ 00:33:45.324 START TEST kernel_target_abort 00:33:45.324 ************************************ 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:33:45.324 23:04:53 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:45.324 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:45.583 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:45.583 23:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:46.521 Waiting for block devices as requested 00:33:46.779 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:46.779 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:46.779 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:47.036 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:47.036 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:47.036 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:47.293 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:47.293 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:47.293 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:47.293 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:47.558 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:47.558 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:47.558 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:47.558 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:47.817 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:47.817 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:47.817 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:48.075 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:48.075 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:48.075 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:48.075 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:33:48.075 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:48.075 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:48.075 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:48.075 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:48.075 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:48.076 No valid GPT data, bailing 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:48.076 00:33:48.076 Discovery Log Number of Records 2, Generation counter 2 00:33:48.076 =====Discovery Log Entry 0====== 00:33:48.076 trtype: tcp 00:33:48.076 adrfam: ipv4 00:33:48.076 subtype: current discovery subsystem 00:33:48.076 treq: not specified, sq flow control disable supported 00:33:48.076 portid: 1 00:33:48.076 trsvcid: 4420 00:33:48.076 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:48.076 traddr: 10.0.0.1 00:33:48.076 eflags: none 00:33:48.076 sectype: none 00:33:48.076 =====Discovery Log Entry 1====== 00:33:48.076 trtype: tcp 00:33:48.076 adrfam: ipv4 00:33:48.076 subtype: nvme subsystem 00:33:48.076 treq: not specified, sq flow control disable supported 00:33:48.076 portid: 1 00:33:48.076 trsvcid: 4420 00:33:48.076 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:48.076 traddr: 10.0.0.1 00:33:48.076 eflags: none 00:33:48.076 sectype: none 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:48.076 23:04:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:51.361 Initializing NVMe Controllers 00:33:51.361 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:51.361 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:51.361 Initialization complete. Launching workers. 
00:33:51.361 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56750, failed: 0 00:33:51.361 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56750, failed to submit 0 00:33:51.361 success 0, unsuccessful 56750, failed 0 00:33:51.361 23:04:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:51.361 23:04:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:54.652 Initializing NVMe Controllers 00:33:54.652 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:54.652 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:54.652 Initialization complete. Launching workers. 00:33:54.652 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100355, failed: 0 00:33:54.652 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25266, failed to submit 75089 00:33:54.652 success 0, unsuccessful 25266, failed 0 00:33:54.652 23:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:54.652 23:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:57.971 Initializing NVMe Controllers 00:33:57.971 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:57.971 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:57.971 Initialization complete. Launching workers. 
00:33:57.971 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96207, failed: 0 00:33:57.971 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24070, failed to submit 72137 00:33:57.971 success 0, unsuccessful 24070, failed 0 00:33:57.971 23:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:57.971 23:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:57.971 23:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:33:57.971 23:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:57.971 23:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:57.971 23:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:57.971 23:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:57.971 23:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:57.971 23:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:57.971 23:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:58.908 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:58.908 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:58.908 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:58.908 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:58.908 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:58.908 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:58.908 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:58.908 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:58.908 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:58.908 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:58.908 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:58.908 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:58.908 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:58.908 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:58.908 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:58.908 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:59.846 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:00.104 00:34:00.104 real 0m14.576s 00:34:00.104 user 0m6.706s 00:34:00.104 sys 0m3.341s 00:34:00.104 23:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:00.104 23:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:00.104 ************************************ 00:34:00.104 END TEST kernel_target_abort 00:34:00.104 ************************************ 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:00.104 rmmod nvme_tcp 00:34:00.104 rmmod nvme_fabrics 00:34:00.104 rmmod nvme_keyring 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 252359 ']' 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 252359 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 252359 ']' 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 252359 00:34:00.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (252359) - No such process 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 252359 is not found' 00:34:00.104 Process with pid 252359 is not found 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:00.104 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:01.476 Waiting for block devices as requested 00:34:01.476 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:01.476 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:01.476 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:01.476 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:01.734 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:01.734 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:01.734 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:01.734 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:01.993 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:01.993 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:01.993 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:01.993 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:02.253 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:02.253 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:02.253 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:34:02.512 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:02.512 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:02.512 23:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:02.512 23:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:02.512 23:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:02.512 23:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:02.512 23:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:02.512 23:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:02.512 23:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:02.512 23:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:02.512 23:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.512 23:05:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:02.512 23:05:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.051 23:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:05.051 00:34:05.051 real 0m38.564s 00:34:05.051 user 1m2.407s 00:34:05.051 sys 0m9.711s 00:34:05.051 23:05:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:05.051 23:05:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:05.051 ************************************ 00:34:05.051 END TEST nvmf_abort_qd_sizes 00:34:05.051 ************************************ 00:34:05.052 23:05:12 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:05.052 23:05:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:05.052 23:05:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:34:05.052 23:05:12 -- common/autotest_common.sh@10 -- # set +x 00:34:05.052 ************************************ 00:34:05.052 START TEST keyring_file 00:34:05.052 ************************************ 00:34:05.052 23:05:12 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:05.052 * Looking for test storage... 00:34:05.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:05.052 23:05:12 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:05.052 23:05:12 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:34:05.052 23:05:12 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:05.052 23:05:12 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:05.052 23:05:12 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:05.052 23:05:12 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.052 23:05:12 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:05.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.052 --rc genhtml_branch_coverage=1 00:34:05.052 --rc genhtml_function_coverage=1 00:34:05.052 --rc genhtml_legend=1 00:34:05.052 --rc geninfo_all_blocks=1 00:34:05.052 --rc geninfo_unexecuted_blocks=1 00:34:05.052 00:34:05.052 ' 00:34:05.052 23:05:12 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:05.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.052 --rc genhtml_branch_coverage=1 00:34:05.052 --rc genhtml_function_coverage=1 00:34:05.052 --rc genhtml_legend=1 00:34:05.052 --rc geninfo_all_blocks=1 00:34:05.052 --rc 
geninfo_unexecuted_blocks=1 00:34:05.052 00:34:05.052 ' 00:34:05.052 23:05:12 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:05.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.052 --rc genhtml_branch_coverage=1 00:34:05.052 --rc genhtml_function_coverage=1 00:34:05.052 --rc genhtml_legend=1 00:34:05.052 --rc geninfo_all_blocks=1 00:34:05.052 --rc geninfo_unexecuted_blocks=1 00:34:05.052 00:34:05.052 ' 00:34:05.052 23:05:12 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:05.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.052 --rc genhtml_branch_coverage=1 00:34:05.052 --rc genhtml_function_coverage=1 00:34:05.052 --rc genhtml_legend=1 00:34:05.052 --rc geninfo_all_blocks=1 00:34:05.052 --rc geninfo_unexecuted_blocks=1 00:34:05.052 00:34:05.052 ' 00:34:05.052 23:05:12 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:05.052 23:05:12 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.052 23:05:12 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.052 23:05:12 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.052 23:05:12 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.052 23:05:12 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.052 23:05:12 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.052 23:05:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:05.052 23:05:12 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:05.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:05.052 23:05:12 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:05.052 23:05:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:05.052 23:05:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:05.052 23:05:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:05.052 23:05:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:05.052 23:05:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:05.052 23:05:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:05.052 23:05:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:05.052 23:05:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:05.052 23:05:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:05.052 23:05:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:05.052 23:05:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:05.052 23:05:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:05.052 23:05:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mJOkRnZ66x 00:34:05.052 23:05:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:05.053 23:05:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mJOkRnZ66x 00:34:05.053 23:05:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mJOkRnZ66x 00:34:05.053 23:05:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.mJOkRnZ66x 00:34:05.053 23:05:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:05.053 23:05:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:05.053 23:05:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:05.053 23:05:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:05.053 23:05:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:05.053 23:05:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:05.053 23:05:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yQrYY1YBDA 00:34:05.053 23:05:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:05.053 23:05:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:05.053 23:05:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yQrYY1YBDA 00:34:05.053 23:05:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yQrYY1YBDA 00:34:05.053 23:05:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yQrYY1YBDA 
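Editor's note: the key-preparation sequence traced above (`mktemp`, write the hex key, `chmod 0600`) can be sketched as below. This is a minimal re-creation for illustration, assuming GNU coreutils; the `NVMeTLSkey-1` interchange encoding produced by `format_interchange_psk` (the embedded `python -` step) is not reproduced here.

```shell
# Hypothetical sketch of prep_key from keyring/common.sh: create a temp
# file, store the raw hex key in it, and lock permissions to 0600 so
# the keyring will accept the file.
keypath=$(mktemp)                                   # e.g. /tmp/tmp.XXXXXXXXXX
printf '%s' "00112233445566778899aabbccddeeff" > "$keypath"
chmod 0600 "$keypath"                               # owner read/write only
stat -c '%a' "$keypath"                             # prints: 600
rm -f "$keypath"
```

In the real test the file contents are the formatted `NVMeTLSkey-1:...` string rather than the raw hex, and the path is handed to `keyring_file_add_key` over the bperf socket.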
00:34:05.053 23:05:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=258124 00:34:05.053 23:05:12 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:05.053 23:05:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 258124 00:34:05.053 23:05:12 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 258124 ']' 00:34:05.053 23:05:12 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.053 23:05:12 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.053 23:05:12 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.053 23:05:12 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.053 23:05:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:05.053 [2024-12-10 23:05:12.641713] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:34:05.053 [2024-12-10 23:05:12.641797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258124 ] 00:34:05.053 [2024-12-10 23:05:12.706492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.053 [2024-12-10 23:05:12.764556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.311 23:05:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.311 23:05:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:05.311 23:05:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:05.311 23:05:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.311 23:05:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:05.311 [2024-12-10 23:05:13.012456] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.311 null0 00:34:05.569 [2024-12-10 23:05:13.044515] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:05.569 [2024-12-10 23:05:13.045061] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:05.569 23:05:13 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.569 23:05:13 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:05.569 23:05:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:05.570 [2024-12-10 23:05:13.068580] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:05.570 request: 00:34:05.570 { 00:34:05.570 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:05.570 "secure_channel": false, 00:34:05.570 "listen_address": { 00:34:05.570 "trtype": "tcp", 00:34:05.570 "traddr": "127.0.0.1", 00:34:05.570 "trsvcid": "4420" 00:34:05.570 }, 00:34:05.570 "method": "nvmf_subsystem_add_listener", 00:34:05.570 "req_id": 1 00:34:05.570 } 00:34:05.570 Got JSON-RPC error response 00:34:05.570 response: 00:34:05.570 { 00:34:05.570 "code": -32602, 00:34:05.570 "message": "Invalid parameters" 00:34:05.570 } 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:05.570 23:05:13 keyring_file -- keyring/file.sh@47 -- # bperfpid=258138 00:34:05.570 23:05:13 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:05.570 23:05:13 keyring_file -- keyring/file.sh@49 -- # waitforlisten 258138 /var/tmp/bperf.sock 00:34:05.570 23:05:13 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 258138 ']' 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:05.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.570 23:05:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:05.570 [2024-12-10 23:05:13.115984] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:34:05.570 [2024-12-10 23:05:13.116045] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258138 ] 00:34:05.570 [2024-12-10 23:05:13.181439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.570 [2024-12-10 23:05:13.245097] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.828 23:05:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.828 23:05:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:05.828 23:05:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mJOkRnZ66x 00:34:05.828 23:05:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mJOkRnZ66x 00:34:06.085 23:05:13 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yQrYY1YBDA 00:34:06.085 23:05:13 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yQrYY1YBDA 00:34:06.342 23:05:13 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:06.342 23:05:13 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:06.342 23:05:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:06.342 23:05:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:06.342 23:05:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:06.600 23:05:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.mJOkRnZ66x == \/\t\m\p\/\t\m\p\.\m\J\O\k\R\n\Z\6\6\x ]] 00:34:06.600 23:05:14 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:06.600 23:05:14 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:06.600 23:05:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:06.600 23:05:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:06.600 23:05:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:06.857 23:05:14 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.yQrYY1YBDA == \/\t\m\p\/\t\m\p\.\y\Q\r\Y\Y\1\Y\B\D\A ]] 00:34:06.857 23:05:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:06.857 23:05:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:06.857 23:05:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:06.857 23:05:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:06.857 23:05:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:06.857 23:05:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
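Editor's note: the negative tests above (`NOT rpc_cmd nvmf_subsystem_add_listener ...`) rely on an exit-status-inverting wrapper. A simplified sketch is shown below; the real `NOT` in SPDK's `autotest_common.sh` is more elaborate (it tracks the error status in `es` and special-cases exit codes above 128), so this captures only the core inversion.

```shell
# Simplified sketch of the NOT helper pattern: run a command that is
# expected to fail, and succeed only if it actually failed.
NOT() {
    if "$@"; then
        return 1    # the command unexpectedly succeeded
    fi
    return 0        # the command failed, which is what we wanted
}

NOT false && echo "negative test passed"   # prints: negative test passed
```

This is why the log shows `es=1` after the duplicate-listener RPC returns "Listener already exists": the wrapped command failed as expected, so the test continues.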
00:34:07.115 23:05:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:07.115 23:05:14 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:07.115 23:05:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:07.115 23:05:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:07.115 23:05:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:07.115 23:05:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:07.115 23:05:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:07.372 23:05:15 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:07.372 23:05:15 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:07.372 23:05:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:07.630 [2024-12-10 23:05:15.254863] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:07.630 nvme0n1 00:34:07.630 23:05:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:07.630 23:05:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:07.630 23:05:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:07.630 23:05:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:07.630 23:05:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:07.630 23:05:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:34:08.195 23:05:15 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:08.195 23:05:15 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:08.195 23:05:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:08.195 23:05:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:08.195 23:05:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:08.195 23:05:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:08.195 23:05:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:08.195 23:05:15 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:08.195 23:05:15 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:08.453 Running I/O for 1 seconds... 00:34:09.390 10374.00 IOPS, 40.52 MiB/s 00:34:09.390 Latency(us) 00:34:09.390 [2024-12-10T22:05:17.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.390 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:09.390 nvme0n1 : 1.01 10421.15 40.71 0.00 0.00 12243.37 5145.79 19709.35 00:34:09.390 [2024-12-10T22:05:17.122Z] =================================================================================================================== 00:34:09.390 [2024-12-10T22:05:17.122Z] Total : 10421.15 40.71 0.00 0.00 12243.37 5145.79 19709.35 00:34:09.390 { 00:34:09.390 "results": [ 00:34:09.390 { 00:34:09.390 "job": "nvme0n1", 00:34:09.390 "core_mask": "0x2", 00:34:09.390 "workload": "randrw", 00:34:09.390 "percentage": 50, 00:34:09.390 "status": "finished", 00:34:09.390 "queue_depth": 128, 00:34:09.390 "io_size": 4096, 00:34:09.390 "runtime": 1.007854, 00:34:09.390 "iops": 10421.152270070863, 00:34:09.390 "mibps": 40.70762605496431, 
00:34:09.390 "io_failed": 0, 00:34:09.390 "io_timeout": 0, 00:34:09.390 "avg_latency_us": 12243.371407816461, 00:34:09.390 "min_latency_us": 5145.789629629629, 00:34:09.390 "max_latency_us": 19709.345185185186 00:34:09.390 } 00:34:09.390 ], 00:34:09.390 "core_count": 1 00:34:09.390 } 00:34:09.390 23:05:17 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:09.390 23:05:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:09.648 23:05:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:09.648 23:05:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:09.648 23:05:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:09.648 23:05:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:09.648 23:05:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:09.648 23:05:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:09.905 23:05:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:09.905 23:05:17 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:09.905 23:05:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:09.905 23:05:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:09.905 23:05:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:09.905 23:05:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:09.905 23:05:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:10.163 23:05:17 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:10.163 23:05:17 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:10.163 23:05:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:10.163 23:05:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:10.163 23:05:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:10.163 23:05:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:10.163 23:05:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:10.163 23:05:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:10.163 23:05:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:10.163 23:05:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:10.421 [2024-12-10 23:05:18.132436] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:10.421 [2024-12-10 23:05:18.133330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e3e90 (107): Transport endpoint is not connected 00:34:10.421 [2024-12-10 23:05:18.134322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e3e90 (9): Bad file descriptor 00:34:10.421 [2024-12-10 23:05:18.135321] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:10.421 [2024-12-10 23:05:18.135339] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:10.421 [2024-12-10 23:05:18.135368] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:10.421 [2024-12-10 23:05:18.135384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:34:10.421 request: 00:34:10.421 { 00:34:10.421 "name": "nvme0", 00:34:10.421 "trtype": "tcp", 00:34:10.421 "traddr": "127.0.0.1", 00:34:10.421 "adrfam": "ipv4", 00:34:10.421 "trsvcid": "4420", 00:34:10.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:10.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:10.421 "prchk_reftag": false, 00:34:10.421 "prchk_guard": false, 00:34:10.421 "hdgst": false, 00:34:10.421 "ddgst": false, 00:34:10.421 "psk": "key1", 00:34:10.421 "allow_unrecognized_csi": false, 00:34:10.421 "method": "bdev_nvme_attach_controller", 00:34:10.421 "req_id": 1 00:34:10.421 } 00:34:10.421 Got JSON-RPC error response 00:34:10.421 response: 00:34:10.421 { 00:34:10.421 "code": -5, 00:34:10.421 "message": "Input/output error" 00:34:10.421 } 00:34:10.680 23:05:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:10.680 23:05:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:10.680 23:05:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:10.680 23:05:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:10.680 23:05:18 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:10.680 23:05:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:10.680 23:05:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:10.680 23:05:18 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:34:10.680 23:05:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:10.680 23:05:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:10.938 23:05:18 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:10.938 23:05:18 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:10.938 23:05:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:10.938 23:05:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:10.938 23:05:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:10.938 23:05:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:10.938 23:05:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:11.196 23:05:18 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:11.196 23:05:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:11.196 23:05:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:11.454 23:05:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:11.454 23:05:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:11.712 23:05:19 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:11.712 23:05:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:11.712 23:05:19 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:11.970 23:05:19 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:34:11.970 23:05:19 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.mJOkRnZ66x 00:34:11.970 23:05:19 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.mJOkRnZ66x 00:34:11.970 23:05:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:11.970 23:05:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.mJOkRnZ66x 00:34:11.970 23:05:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:11.970 23:05:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:11.970 23:05:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:11.970 23:05:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:11.970 23:05:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mJOkRnZ66x 00:34:11.970 23:05:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mJOkRnZ66x 00:34:12.228 [2024-12-10 23:05:19.774308] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mJOkRnZ66x': 0100660 00:34:12.228 [2024-12-10 23:05:19.774339] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:12.228 request: 00:34:12.228 { 00:34:12.228 "name": "key0", 00:34:12.228 "path": "/tmp/tmp.mJOkRnZ66x", 00:34:12.228 "method": "keyring_file_add_key", 00:34:12.228 "req_id": 1 00:34:12.228 } 00:34:12.228 Got JSON-RPC error response 00:34:12.228 response: 00:34:12.228 { 00:34:12.228 "code": -1, 00:34:12.228 "message": "Operation not permitted" 00:34:12.228 } 00:34:12.228 23:05:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:12.228 23:05:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:12.228 23:05:19 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:12.228 23:05:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:12.228 23:05:19 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.mJOkRnZ66x 00:34:12.228 23:05:19 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mJOkRnZ66x 00:34:12.228 23:05:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mJOkRnZ66x 00:34:12.485 23:05:20 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.mJOkRnZ66x 00:34:12.485 23:05:20 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:12.485 23:05:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:12.485 23:05:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:12.485 23:05:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:12.485 23:05:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:12.485 23:05:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:12.743 23:05:20 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:12.743 23:05:20 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:12.743 23:05:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:12.743 23:05:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:12.743 23:05:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:12.743 23:05:20 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:12.743 23:05:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:12.743 23:05:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:12.743 23:05:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:12.743 23:05:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:13.001 [2024-12-10 23:05:20.612609] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.mJOkRnZ66x': No such file or directory 00:34:13.001 [2024-12-10 23:05:20.612642] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:13.001 [2024-12-10 23:05:20.612680] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:13.001 [2024-12-10 23:05:20.612693] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:13.001 [2024-12-10 23:05:20.612706] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:13.001 [2024-12-10 23:05:20.612717] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:13.001 request: 00:34:13.001 { 00:34:13.001 "name": "nvme0", 00:34:13.001 "trtype": "tcp", 00:34:13.001 "traddr": "127.0.0.1", 00:34:13.001 "adrfam": "ipv4", 00:34:13.001 "trsvcid": "4420", 00:34:13.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:13.001 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:34:13.001 "prchk_reftag": false, 00:34:13.001 "prchk_guard": false, 00:34:13.001 "hdgst": false, 00:34:13.001 "ddgst": false, 00:34:13.001 "psk": "key0", 00:34:13.001 "allow_unrecognized_csi": false, 00:34:13.001 "method": "bdev_nvme_attach_controller", 00:34:13.001 "req_id": 1 00:34:13.001 } 00:34:13.001 Got JSON-RPC error response 00:34:13.001 response: 00:34:13.001 { 00:34:13.001 "code": -19, 00:34:13.001 "message": "No such device" 00:34:13.001 } 00:34:13.001 23:05:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:13.001 23:05:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:13.001 23:05:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:13.001 23:05:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:13.001 23:05:20 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:13.001 23:05:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:13.259 23:05:20 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:13.259 23:05:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:13.259 23:05:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:13.259 23:05:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:13.259 23:05:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:13.259 23:05:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:13.259 23:05:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4BUWilFjH5 00:34:13.259 23:05:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:13.259 23:05:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:13.259 23:05:20 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:34:13.259 23:05:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:13.259 23:05:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:13.259 23:05:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:13.259 23:05:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:13.259 23:05:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4BUWilFjH5 00:34:13.259 23:05:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4BUWilFjH5 00:34:13.259 23:05:20 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.4BUWilFjH5 00:34:13.259 23:05:20 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4BUWilFjH5 00:34:13.259 23:05:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4BUWilFjH5 00:34:13.517 23:05:21 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:13.517 23:05:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:14.082 nvme0n1 00:34:14.082 23:05:21 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:14.082 23:05:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:14.082 23:05:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:14.082 23:05:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:14.082 23:05:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:14.082 
23:05:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:14.339 23:05:21 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:34:14.339 23:05:21 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:34:14.339 23:05:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:14.597 23:05:22 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:34:14.597 23:05:22 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:34:14.597 23:05:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:14.597 23:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:14.597 23:05:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:14.854 23:05:22 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:34:14.854 23:05:22 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:34:14.854 23:05:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:14.854 23:05:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:14.854 23:05:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:14.854 23:05:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:14.855 23:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:15.112 23:05:22 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:34:15.112 23:05:22 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:15.112 23:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:34:15.370 23:05:22 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:34:15.370 23:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:15.370 23:05:22 keyring_file -- keyring/file.sh@105 -- # jq length 00:34:15.627 23:05:23 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:34:15.627 23:05:23 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4BUWilFjH5 00:34:15.627 23:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4BUWilFjH5 00:34:15.884 23:05:23 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yQrYY1YBDA 00:34:15.884 23:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yQrYY1YBDA 00:34:16.142 23:05:23 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:16.142 23:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:16.399 nvme0n1 00:34:16.400 23:05:24 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:34:16.400 23:05:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:16.967 23:05:24 keyring_file -- keyring/file.sh@113 -- # config='{ 00:34:16.967 "subsystems": [ 00:34:16.967 { 00:34:16.967 "subsystem": "keyring", 00:34:16.967 
"config": [ 00:34:16.967 { 00:34:16.967 "method": "keyring_file_add_key", 00:34:16.967 "params": { 00:34:16.967 "name": "key0", 00:34:16.967 "path": "/tmp/tmp.4BUWilFjH5" 00:34:16.967 } 00:34:16.967 }, 00:34:16.967 { 00:34:16.967 "method": "keyring_file_add_key", 00:34:16.967 "params": { 00:34:16.967 "name": "key1", 00:34:16.967 "path": "/tmp/tmp.yQrYY1YBDA" 00:34:16.967 } 00:34:16.967 } 00:34:16.967 ] 00:34:16.967 }, 00:34:16.967 { 00:34:16.967 "subsystem": "iobuf", 00:34:16.967 "config": [ 00:34:16.967 { 00:34:16.967 "method": "iobuf_set_options", 00:34:16.967 "params": { 00:34:16.967 "small_pool_count": 8192, 00:34:16.967 "large_pool_count": 1024, 00:34:16.967 "small_bufsize": 8192, 00:34:16.967 "large_bufsize": 135168, 00:34:16.967 "enable_numa": false 00:34:16.967 } 00:34:16.967 } 00:34:16.967 ] 00:34:16.967 }, 00:34:16.967 { 00:34:16.967 "subsystem": "sock", 00:34:16.967 "config": [ 00:34:16.967 { 00:34:16.967 "method": "sock_set_default_impl", 00:34:16.967 "params": { 00:34:16.967 "impl_name": "posix" 00:34:16.967 } 00:34:16.967 }, 00:34:16.967 { 00:34:16.967 "method": "sock_impl_set_options", 00:34:16.967 "params": { 00:34:16.967 "impl_name": "ssl", 00:34:16.967 "recv_buf_size": 4096, 00:34:16.967 "send_buf_size": 4096, 00:34:16.967 "enable_recv_pipe": true, 00:34:16.967 "enable_quickack": false, 00:34:16.967 "enable_placement_id": 0, 00:34:16.967 "enable_zerocopy_send_server": true, 00:34:16.967 "enable_zerocopy_send_client": false, 00:34:16.967 "zerocopy_threshold": 0, 00:34:16.967 "tls_version": 0, 00:34:16.967 "enable_ktls": false 00:34:16.967 } 00:34:16.967 }, 00:34:16.967 { 00:34:16.967 "method": "sock_impl_set_options", 00:34:16.967 "params": { 00:34:16.967 "impl_name": "posix", 00:34:16.967 "recv_buf_size": 2097152, 00:34:16.967 "send_buf_size": 2097152, 00:34:16.967 "enable_recv_pipe": true, 00:34:16.967 "enable_quickack": false, 00:34:16.967 "enable_placement_id": 0, 00:34:16.967 "enable_zerocopy_send_server": true, 00:34:16.967 
"enable_zerocopy_send_client": false, 00:34:16.967 "zerocopy_threshold": 0, 00:34:16.967 "tls_version": 0, 00:34:16.967 "enable_ktls": false 00:34:16.967 } 00:34:16.967 } 00:34:16.967 ] 00:34:16.967 }, 00:34:16.967 { 00:34:16.967 "subsystem": "vmd", 00:34:16.967 "config": [] 00:34:16.967 }, 00:34:16.967 { 00:34:16.967 "subsystem": "accel", 00:34:16.967 "config": [ 00:34:16.967 { 00:34:16.967 "method": "accel_set_options", 00:34:16.967 "params": { 00:34:16.967 "small_cache_size": 128, 00:34:16.967 "large_cache_size": 16, 00:34:16.967 "task_count": 2048, 00:34:16.967 "sequence_count": 2048, 00:34:16.967 "buf_count": 2048 00:34:16.967 } 00:34:16.967 } 00:34:16.967 ] 00:34:16.967 }, 00:34:16.967 { 00:34:16.967 "subsystem": "bdev", 00:34:16.967 "config": [ 00:34:16.967 { 00:34:16.967 "method": "bdev_set_options", 00:34:16.967 "params": { 00:34:16.967 "bdev_io_pool_size": 65535, 00:34:16.967 "bdev_io_cache_size": 256, 00:34:16.967 "bdev_auto_examine": true, 00:34:16.967 "iobuf_small_cache_size": 128, 00:34:16.967 "iobuf_large_cache_size": 16 00:34:16.967 } 00:34:16.967 }, 00:34:16.967 { 00:34:16.967 "method": "bdev_raid_set_options", 00:34:16.967 "params": { 00:34:16.967 "process_window_size_kb": 1024, 00:34:16.967 "process_max_bandwidth_mb_sec": 0 00:34:16.967 } 00:34:16.967 }, 00:34:16.967 { 00:34:16.967 "method": "bdev_iscsi_set_options", 00:34:16.967 "params": { 00:34:16.967 "timeout_sec": 30 00:34:16.967 } 00:34:16.967 }, 00:34:16.967 { 00:34:16.967 "method": "bdev_nvme_set_options", 00:34:16.967 "params": { 00:34:16.967 "action_on_timeout": "none", 00:34:16.967 "timeout_us": 0, 00:34:16.967 "timeout_admin_us": 0, 00:34:16.967 "keep_alive_timeout_ms": 10000, 00:34:16.967 "arbitration_burst": 0, 00:34:16.967 "low_priority_weight": 0, 00:34:16.967 "medium_priority_weight": 0, 00:34:16.967 "high_priority_weight": 0, 00:34:16.967 "nvme_adminq_poll_period_us": 10000, 00:34:16.967 "nvme_ioq_poll_period_us": 0, 00:34:16.967 "io_queue_requests": 512, 00:34:16.967 
"delay_cmd_submit": true, 00:34:16.967 "transport_retry_count": 4, 00:34:16.967 "bdev_retry_count": 3, 00:34:16.967 "transport_ack_timeout": 0, 00:34:16.967 "ctrlr_loss_timeout_sec": 0, 00:34:16.967 "reconnect_delay_sec": 0, 00:34:16.967 "fast_io_fail_timeout_sec": 0, 00:34:16.967 "disable_auto_failback": false, 00:34:16.967 "generate_uuids": false, 00:34:16.967 "transport_tos": 0, 00:34:16.967 "nvme_error_stat": false, 00:34:16.967 "rdma_srq_size": 0, 00:34:16.967 "io_path_stat": false, 00:34:16.967 "allow_accel_sequence": false, 00:34:16.967 "rdma_max_cq_size": 0, 00:34:16.967 "rdma_cm_event_timeout_ms": 0, 00:34:16.968 "dhchap_digests": [ 00:34:16.968 "sha256", 00:34:16.968 "sha384", 00:34:16.968 "sha512" 00:34:16.968 ], 00:34:16.968 "dhchap_dhgroups": [ 00:34:16.968 "null", 00:34:16.968 "ffdhe2048", 00:34:16.968 "ffdhe3072", 00:34:16.968 "ffdhe4096", 00:34:16.968 "ffdhe6144", 00:34:16.968 "ffdhe8192" 00:34:16.968 ], 00:34:16.968 "rdma_umr_per_io": false 00:34:16.968 } 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "method": "bdev_nvme_attach_controller", 00:34:16.968 "params": { 00:34:16.968 "name": "nvme0", 00:34:16.968 "trtype": "TCP", 00:34:16.968 "adrfam": "IPv4", 00:34:16.968 "traddr": "127.0.0.1", 00:34:16.968 "trsvcid": "4420", 00:34:16.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.968 "prchk_reftag": false, 00:34:16.968 "prchk_guard": false, 00:34:16.968 "ctrlr_loss_timeout_sec": 0, 00:34:16.968 "reconnect_delay_sec": 0, 00:34:16.968 "fast_io_fail_timeout_sec": 0, 00:34:16.968 "psk": "key0", 00:34:16.968 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:16.968 "hdgst": false, 00:34:16.968 "ddgst": false, 00:34:16.968 "multipath": "multipath" 00:34:16.968 } 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "method": "bdev_nvme_set_hotplug", 00:34:16.968 "params": { 00:34:16.968 "period_us": 100000, 00:34:16.968 "enable": false 00:34:16.968 } 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "method": "bdev_wait_for_examine" 00:34:16.968 } 00:34:16.968 ] 00:34:16.968 
}, 00:34:16.968 { 00:34:16.968 "subsystem": "nbd", 00:34:16.968 "config": [] 00:34:16.968 } 00:34:16.968 ] 00:34:16.968 }' 00:34:16.968 23:05:24 keyring_file -- keyring/file.sh@115 -- # killprocess 258138 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 258138 ']' 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 258138 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 258138 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 258138' 00:34:16.968 killing process with pid 258138 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@973 -- # kill 258138 00:34:16.968 Received shutdown signal, test time was about 1.000000 seconds 00:34:16.968 00:34:16.968 Latency(us) 00:34:16.968 [2024-12-10T22:05:24.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.968 [2024-12-10T22:05:24.700Z] =================================================================================================================== 00:34:16.968 [2024-12-10T22:05:24.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@978 -- # wait 258138 00:34:16.968 23:05:24 keyring_file -- keyring/file.sh@118 -- # bperfpid=259695 00:34:16.968 23:05:24 keyring_file -- keyring/file.sh@120 -- # waitforlisten 259695 /var/tmp/bperf.sock 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 259695 ']' 00:34:16.968 23:05:24 keyring_file -- keyring/file.sh@116 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:16.968 23:05:24 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.968 23:05:24 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:34:16.968 "subsystems": [ 00:34:16.968 { 00:34:16.968 "subsystem": "keyring", 00:34:16.968 "config": [ 00:34:16.968 { 00:34:16.968 "method": "keyring_file_add_key", 00:34:16.968 "params": { 00:34:16.968 "name": "key0", 00:34:16.968 "path": "/tmp/tmp.4BUWilFjH5" 00:34:16.968 } 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "method": "keyring_file_add_key", 00:34:16.968 "params": { 00:34:16.968 "name": "key1", 00:34:16.968 "path": "/tmp/tmp.yQrYY1YBDA" 00:34:16.968 } 00:34:16.968 } 00:34:16.968 ] 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "subsystem": "iobuf", 00:34:16.968 "config": [ 00:34:16.968 { 00:34:16.968 "method": "iobuf_set_options", 00:34:16.968 "params": { 00:34:16.968 "small_pool_count": 8192, 00:34:16.968 "large_pool_count": 1024, 00:34:16.968 "small_bufsize": 8192, 00:34:16.968 "large_bufsize": 135168, 00:34:16.968 "enable_numa": false 00:34:16.968 } 00:34:16.968 } 00:34:16.968 ] 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "subsystem": "sock", 00:34:16.968 "config": [ 00:34:16.968 { 00:34:16.968 "method": "sock_set_default_impl", 00:34:16.968 "params": { 00:34:16.968 "impl_name": "posix" 00:34:16.968 } 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "method": "sock_impl_set_options", 00:34:16.968 "params": { 00:34:16.968 "impl_name": "ssl", 00:34:16.968 "recv_buf_size": 4096, 00:34:16.968 "send_buf_size": 4096, 00:34:16.968 "enable_recv_pipe": true, 00:34:16.968 "enable_quickack": false, 00:34:16.968 "enable_placement_id": 0, 00:34:16.968 "enable_zerocopy_send_server": true, 00:34:16.968 "enable_zerocopy_send_client": false, 00:34:16.968 
"zerocopy_threshold": 0, 00:34:16.968 "tls_version": 0, 00:34:16.968 "enable_ktls": false 00:34:16.968 } 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "method": "sock_impl_set_options", 00:34:16.968 "params": { 00:34:16.968 "impl_name": "posix", 00:34:16.968 "recv_buf_size": 2097152, 00:34:16.968 "send_buf_size": 2097152, 00:34:16.968 "enable_recv_pipe": true, 00:34:16.968 "enable_quickack": false, 00:34:16.968 "enable_placement_id": 0, 00:34:16.968 "enable_zerocopy_send_server": true, 00:34:16.968 "enable_zerocopy_send_client": false, 00:34:16.968 "zerocopy_threshold": 0, 00:34:16.968 "tls_version": 0, 00:34:16.968 "enable_ktls": false 00:34:16.968 } 00:34:16.968 } 00:34:16.968 ] 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "subsystem": "vmd", 00:34:16.968 "config": [] 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "subsystem": "accel", 00:34:16.968 "config": [ 00:34:16.968 { 00:34:16.968 "method": "accel_set_options", 00:34:16.968 "params": { 00:34:16.968 "small_cache_size": 128, 00:34:16.968 "large_cache_size": 16, 00:34:16.968 "task_count": 2048, 00:34:16.968 "sequence_count": 2048, 00:34:16.968 "buf_count": 2048 00:34:16.968 } 00:34:16.968 } 00:34:16.968 ] 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "subsystem": "bdev", 00:34:16.968 "config": [ 00:34:16.968 { 00:34:16.968 "method": "bdev_set_options", 00:34:16.968 "params": { 00:34:16.968 "bdev_io_pool_size": 65535, 00:34:16.968 "bdev_io_cache_size": 256, 00:34:16.968 "bdev_auto_examine": true, 00:34:16.968 "iobuf_small_cache_size": 128, 00:34:16.968 "iobuf_large_cache_size": 16 00:34:16.968 } 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "method": "bdev_raid_set_options", 00:34:16.968 "params": { 00:34:16.968 "process_window_size_kb": 1024, 00:34:16.968 "process_max_bandwidth_mb_sec": 0 00:34:16.968 } 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "method": "bdev_iscsi_set_options", 00:34:16.968 "params": { 00:34:16.968 "timeout_sec": 30 00:34:16.968 } 00:34:16.968 }, 00:34:16.968 { 00:34:16.968 "method": 
"bdev_nvme_set_options", 00:34:16.968 "params": { 00:34:16.968 "action_on_timeout": "none", 00:34:16.968 "timeout_us": 0, 00:34:16.968 "timeout_admin_us": 0, 00:34:16.968 "keep_alive_timeout_ms": 10000, 00:34:16.968 "arbitration_burst": 0, 00:34:16.968 "low_priority_weight": 0, 00:34:16.968 "medium_priority_weight": 0, 00:34:16.968 "high_priority_weight": 0, 00:34:16.968 "nvme_adminq_poll_period_us": 10000, 00:34:16.968 "nvme_ioq_poll_period_us": 0, 00:34:16.968 "io_queue_requests": 512, 00:34:16.968 "delay_cmd_submit": true, 00:34:16.968 "transport_retry_count": 4, 00:34:16.968 "bdev_retry_count": 3, 00:34:16.968 "transport_ack_timeout": 0, 00:34:16.968 "ctrlr_loss_timeout_sec": 0, 00:34:16.968 "reconnect_delay_sec": 0, 00:34:16.968 "fast_io_fail_timeout_sec": 0, 00:34:16.968 "disable_auto_failback": false, 00:34:16.968 "generate_uuids": false, 00:34:16.968 "transport_tos": 0, 00:34:16.968 "nvme_error_stat": false, 00:34:16.968 "rdma_srq_size": 0, 00:34:16.968 "io_path_stat": false, 00:34:16.969 "allow_accel_sequence": false, 00:34:16.969 "rdma_max_cq_size": 0, 00:34:16.969 "rdma_cm_event_timeout_ms": 0, 00:34:16.969 "dhchap_digests": [ 00:34:16.969 "sha256", 00:34:16.969 "sha384", 00:34:16.969 "sha512" 00:34:16.969 ], 00:34:16.969 "dhchap_dhgroups": [ 00:34:16.969 "null", 00:34:16.969 "ffdhe2048", 00:34:16.969 "ffdhe3072", 00:34:16.969 "ffdhe4096", 00:34:16.969 "ffdhe6144", 00:34:16.969 "ffdhe8192" 00:34:16.969 ], 00:34:16.969 "rdma_umr_per_io": false 00:34:16.969 } 00:34:16.969 }, 00:34:16.969 { 00:34:16.969 "method": "bdev_nvme_attach_controller", 00:34:16.969 "params": { 00:34:16.969 "name": "nvme0", 00:34:16.969 "trtype": "TCP", 00:34:16.969 "adrfam": "IPv4", 00:34:16.969 "traddr": "127.0.0.1", 00:34:16.969 "trsvcid": "4420", 00:34:16.969 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.969 "prchk_reftag": false, 00:34:16.969 "prchk_guard": false, 00:34:16.969 "ctrlr_loss_timeout_sec": 0, 00:34:16.969 "reconnect_delay_sec": 0, 00:34:16.969 
"fast_io_fail_timeout_sec": 0, 00:34:16.969 "psk": "key0", 00:34:16.969 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:16.969 "hdgst": false, 00:34:16.969 "ddgst": false, 00:34:16.969 "multipath": "multipath" 00:34:16.969 } 00:34:16.969 }, 00:34:16.969 { 00:34:16.969 "method": "bdev_nvme_set_hotplug", 00:34:16.969 "params": { 00:34:16.969 "period_us": 100000, 00:34:16.969 "enable": false 00:34:16.969 } 00:34:16.969 }, 00:34:16.969 { 00:34:16.969 "method": "bdev_wait_for_examine" 00:34:16.969 } 00:34:16.969 ] 00:34:16.969 }, 00:34:16.969 { 00:34:16.969 "subsystem": "nbd", 00:34:16.969 "config": [] 00:34:16.969 } 00:34:16.969 ] 00:34:16.969 }' 00:34:16.969 23:05:24 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:16.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:16.969 23:05:24 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.969 23:05:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:17.227 [2024-12-10 23:05:24.734532] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:34:17.227 [2024-12-10 23:05:24.734632] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259695 ] 00:34:17.227 [2024-12-10 23:05:24.801060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.227 [2024-12-10 23:05:24.856936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.485 [2024-12-10 23:05:25.042660] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:17.485 23:05:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.485 23:05:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:17.485 23:05:25 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:17.485 23:05:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:17.485 23:05:25 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:17.743 23:05:25 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:17.743 23:05:25 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:17.743 23:05:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:17.743 23:05:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:17.743 23:05:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:17.743 23:05:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:17.743 23:05:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:18.001 23:05:25 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:34:18.001 23:05:25 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:34:18.001 23:05:25 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:18.001 23:05:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:18.001 23:05:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:18.001 23:05:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:18.001 23:05:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:18.566 23:05:25 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:34:18.566 23:05:25 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:34:18.566 23:05:25 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:34:18.566 23:05:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:18.566 23:05:26 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:34:18.566 23:05:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:18.566 23:05:26 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4BUWilFjH5 /tmp/tmp.yQrYY1YBDA 00:34:18.566 23:05:26 keyring_file -- keyring/file.sh@20 -- # killprocess 259695 00:34:18.566 23:05:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 259695 ']' 00:34:18.566 23:05:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 259695 00:34:18.566 23:05:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:18.566 23:05:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:18.566 23:05:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 259695 00:34:18.823 23:05:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:18.823 23:05:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:18.823 23:05:26 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 259695' 00:34:18.823 killing process with pid 259695 00:34:18.823 23:05:26 keyring_file -- common/autotest_common.sh@973 -- # kill 259695 00:34:18.823 Received shutdown signal, test time was about 1.000000 seconds 00:34:18.823 00:34:18.823 Latency(us) 00:34:18.823 [2024-12-10T22:05:26.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.823 [2024-12-10T22:05:26.555Z] =================================================================================================================== 00:34:18.823 [2024-12-10T22:05:26.555Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:18.823 23:05:26 keyring_file -- common/autotest_common.sh@978 -- # wait 259695 00:34:18.823 23:05:26 keyring_file -- keyring/file.sh@21 -- # killprocess 258124 00:34:18.823 23:05:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 258124 ']' 00:34:18.823 23:05:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 258124 00:34:18.823 23:05:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:18.823 23:05:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:18.823 23:05:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 258124 00:34:19.081 23:05:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:19.081 23:05:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:19.081 23:05:26 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 258124' 00:34:19.081 killing process with pid 258124 00:34:19.081 23:05:26 keyring_file -- common/autotest_common.sh@973 -- # kill 258124 00:34:19.081 23:05:26 keyring_file -- common/autotest_common.sh@978 -- # wait 258124 00:34:19.339 00:34:19.339 real 0m14.682s 00:34:19.339 user 0m37.329s 00:34:19.339 sys 0m3.220s 00:34:19.339 23:05:26 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.339 23:05:27 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:19.339 ************************************ 00:34:19.339 END TEST keyring_file 00:34:19.339 ************************************ 00:34:19.339 23:05:27 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:34:19.339 23:05:27 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:19.339 23:05:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:19.339 23:05:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.339 23:05:27 -- common/autotest_common.sh@10 -- # set +x 00:34:19.339 ************************************ 00:34:19.339 START TEST keyring_linux 00:34:19.339 ************************************ 00:34:19.339 23:05:27 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:19.339 Joined session keyring: 918236040 00:34:19.599 * Looking for test storage... 
00:34:19.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:19.599 23:05:27 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:19.599 23:05:27 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:34:19.599 23:05:27 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:19.599 23:05:27 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@345 -- # : 1 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@368 -- # return 0 00:34:19.599 23:05:27 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.599 23:05:27 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:19.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.599 --rc genhtml_branch_coverage=1 00:34:19.599 --rc genhtml_function_coverage=1 00:34:19.599 --rc genhtml_legend=1 00:34:19.599 --rc geninfo_all_blocks=1 00:34:19.599 --rc geninfo_unexecuted_blocks=1 00:34:19.599 00:34:19.599 ' 00:34:19.599 23:05:27 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:19.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.599 --rc genhtml_branch_coverage=1 00:34:19.599 --rc genhtml_function_coverage=1 00:34:19.599 --rc genhtml_legend=1 00:34:19.599 --rc geninfo_all_blocks=1 00:34:19.599 --rc geninfo_unexecuted_blocks=1 00:34:19.599 00:34:19.599 ' 
00:34:19.599 23:05:27 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:19.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.599 --rc genhtml_branch_coverage=1 00:34:19.599 --rc genhtml_function_coverage=1 00:34:19.599 --rc genhtml_legend=1 00:34:19.599 --rc geninfo_all_blocks=1 00:34:19.599 --rc geninfo_unexecuted_blocks=1 00:34:19.599 00:34:19.599 ' 00:34:19.599 23:05:27 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:19.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.599 --rc genhtml_branch_coverage=1 00:34:19.599 --rc genhtml_function_coverage=1 00:34:19.599 --rc genhtml_legend=1 00:34:19.599 --rc geninfo_all_blocks=1 00:34:19.599 --rc geninfo_unexecuted_blocks=1 00:34:19.599 00:34:19.599 ' 00:34:19.599 23:05:27 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:19.599 23:05:27 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.599 23:05:27 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.599 23:05:27 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.599 23:05:27 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.599 23:05:27 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.599 23:05:27 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.599 23:05:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:19.600 23:05:27 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:19.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:19.600 23:05:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:19.600 23:05:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:19.600 23:05:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:19.600 23:05:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:19.600 23:05:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:19.600 23:05:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:19.600 /tmp/:spdk-test:key0 00:34:19.600 23:05:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:19.600 23:05:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:19.600 23:05:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:19.600 /tmp/:spdk-test:key1 00:34:19.600 23:05:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=260087 00:34:19.600 23:05:27 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:19.600 23:05:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 260087 00:34:19.600 23:05:27 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 260087 ']' 00:34:19.600 23:05:27 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.600 23:05:27 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:19.600 23:05:27 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.600 23:05:27 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:19.600 23:05:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:19.859 [2024-12-10 23:05:27.369762] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:34:19.859 [2024-12-10 23:05:27.369877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid260087 ] 00:34:19.859 [2024-12-10 23:05:27.439855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.859 [2024-12-10 23:05:27.501211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.117 23:05:27 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:20.117 23:05:27 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:20.117 23:05:27 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:20.117 23:05:27 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.117 23:05:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:20.117 [2024-12-10 23:05:27.781332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.117 null0 00:34:20.117 [2024-12-10 23:05:27.813382] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:20.117 [2024-12-10 23:05:27.813922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:20.117 23:05:27 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.117 23:05:27 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:20.117 443514303 00:34:20.117 23:05:27 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:20.117 310147011 00:34:20.117 23:05:27 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=260104 00:34:20.117 23:05:27 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:20.117 23:05:27 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 260104 /var/tmp/bperf.sock 00:34:20.117 23:05:27 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 260104 ']' 00:34:20.117 23:05:27 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:20.117 23:05:27 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:20.117 23:05:27 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:20.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:20.117 23:05:27 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:20.117 23:05:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:20.375 [2024-12-10 23:05:27.880230] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:34:20.375 [2024-12-10 23:05:27.880305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid260104 ] 00:34:20.375 [2024-12-10 23:05:27.945436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.375 [2024-12-10 23:05:28.003734] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.633 23:05:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:20.633 23:05:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:20.633 23:05:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:20.633 23:05:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:20.894 23:05:28 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:20.894 23:05:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:21.188 23:05:28 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:21.188 23:05:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:21.472 [2024-12-10 23:05:28.981056] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:21.472 nvme0n1 00:34:21.472 23:05:29 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:34:21.472 23:05:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:21.472 23:05:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:21.472 23:05:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:21.472 23:05:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:21.472 23:05:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:21.729 23:05:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:21.729 23:05:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:21.729 23:05:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:21.729 23:05:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:21.729 23:05:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:21.729 23:05:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:21.729 23:05:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:21.987 23:05:29 keyring_linux -- keyring/linux.sh@25 -- # sn=443514303 00:34:21.987 23:05:29 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:21.987 23:05:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:21.987 23:05:29 keyring_linux -- keyring/linux.sh@26 -- # [[ 443514303 == \4\4\3\5\1\4\3\0\3 ]] 00:34:21.987 23:05:29 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 443514303 00:34:21.987 23:05:29 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:21.987 23:05:29 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:22.244 Running I/O for 1 seconds... 00:34:23.177 10118.00 IOPS, 39.52 MiB/s 00:34:23.177 Latency(us) 00:34:23.177 [2024-12-10T22:05:30.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:23.177 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:23.177 nvme0n1 : 1.01 10121.22 39.54 0.00 0.00 12563.50 6262.33 17282.09 00:34:23.177 [2024-12-10T22:05:30.909Z] =================================================================================================================== 00:34:23.177 [2024-12-10T22:05:30.909Z] Total : 10121.22 39.54 0.00 0.00 12563.50 6262.33 17282.09 00:34:23.177 { 00:34:23.177 "results": [ 00:34:23.177 { 00:34:23.177 "job": "nvme0n1", 00:34:23.177 "core_mask": "0x2", 00:34:23.177 "workload": "randread", 00:34:23.177 "status": "finished", 00:34:23.177 "queue_depth": 128, 00:34:23.177 "io_size": 4096, 00:34:23.177 "runtime": 1.012329, 00:34:23.177 "iops": 10121.215533685196, 00:34:23.177 "mibps": 39.535998178457795, 00:34:23.177 "io_failed": 0, 00:34:23.177 "io_timeout": 0, 00:34:23.177 "avg_latency_us": 12563.504349448023, 00:34:23.177 "min_latency_us": 6262.328888888889, 00:34:23.177 "max_latency_us": 17282.085925925927 00:34:23.177 } 00:34:23.177 ], 00:34:23.177 "core_count": 1 00:34:23.177 } 00:34:23.177 23:05:30 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:23.177 23:05:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:23.435 23:05:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:23.435 23:05:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:23.435 23:05:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:23.435 23:05:31 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:23.435 23:05:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:23.435 23:05:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:23.693 23:05:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:23.693 23:05:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:23.693 23:05:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:23.693 23:05:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:23.693 23:05:31 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:34:23.693 23:05:31 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:23.693 23:05:31 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:23.693 23:05:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:23.693 23:05:31 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:23.693 23:05:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:23.693 23:05:31 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:23.693 23:05:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:23.952 [2024-12-10 23:05:31.586924] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:23.952 [2024-12-10 23:05:31.587795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ef60 (107): Transport endpoint is not connected 00:34:23.952 [2024-12-10 23:05:31.588787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ef60 (9): Bad file descriptor 00:34:23.952 [2024-12-10 23:05:31.589787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:23.952 [2024-12-10 23:05:31.589816] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:23.952 [2024-12-10 23:05:31.589847] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:23.952 [2024-12-10 23:05:31.589862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:34:23.952 request:
00:34:23.952 {
00:34:23.952 "name": "nvme0",
00:34:23.952 "trtype": "tcp",
00:34:23.952 "traddr": "127.0.0.1",
00:34:23.952 "adrfam": "ipv4",
00:34:23.952 "trsvcid": "4420",
00:34:23.952 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:23.952 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:34:23.952 "prchk_reftag": false,
00:34:23.952 "prchk_guard": false,
00:34:23.952 "hdgst": false,
00:34:23.952 "ddgst": false,
00:34:23.952 "psk": ":spdk-test:key1",
00:34:23.952 "allow_unrecognized_csi": false,
00:34:23.952 "method": "bdev_nvme_attach_controller",
00:34:23.952 "req_id": 1
00:34:23.952 }
00:34:23.952 Got JSON-RPC error response
00:34:23.952 response:
00:34:23.952 {
00:34:23.952 "code": -5,
00:34:23.952 "message": "Input/output error"
00:34:23.952 }
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@33 -- # sn=443514303
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 443514303
00:34:23.952 1 links removed
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@33 -- # sn=310147011
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 310147011
00:34:23.952 1 links removed
00:34:23.952 23:05:31 keyring_linux -- keyring/linux.sh@41 -- # killprocess 260104
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 260104 ']'
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 260104
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 260104
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 260104'
00:34:23.952 killing process with pid 260104
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@973 -- # kill 260104
00:34:23.952 Received shutdown signal, test time was about 1.000000 seconds
00:34:23.952
00:34:23.952 Latency(us)
00:34:23.952 [2024-12-10T22:05:31.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:23.952 [2024-12-10T22:05:31.684Z] ===================================================================================================================
00:34:23.952 [2024-12-10T22:05:31.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:23.952 23:05:31 keyring_linux -- common/autotest_common.sh@978 -- # wait 260104
00:34:24.211 23:05:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 260087
00:34:24.211 23:05:31 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 260087 ']'
00:34:24.211 23:05:31 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 260087
00:34:24.211 23:05:31 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:34:24.211 23:05:31 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:24.211 23:05:31 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 260087
00:34:24.211 23:05:31 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:24.211 23:05:31 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:24.211 23:05:31 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 260087'
00:34:24.211 killing process with pid 260087
00:34:24.211 23:05:31 keyring_linux -- common/autotest_common.sh@973 -- # kill 260087
00:34:24.211 23:05:31 keyring_linux -- common/autotest_common.sh@978 -- # wait 260087
00:34:24.779
00:34:24.780 real 0m5.295s
00:34:24.780 user 0m10.533s
00:34:24.780 sys 0m1.563s
00:34:24.780 23:05:32 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:24.780 23:05:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:34:24.780 ************************************
00:34:24.780 END TEST keyring_linux
00:34:24.780 ************************************
00:34:24.780 23:05:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:34:24.780 23:05:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:34:24.780 23:05:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:34:24.780 23:05:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:34:24.780 23:05:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:34:24.780 23:05:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:34:24.780 23:05:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:34:24.780 23:05:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:24.780 23:05:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:24.780 23:05:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:24.780 23:05:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:24.780 23:05:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:24.780 23:05:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:24.780 23:05:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:24.780 23:05:32 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:24.780 23:05:32 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:24.780 23:05:32 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:24.780 23:05:32 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:24.780 23:05:32 -- common/autotest_common.sh@10 -- # set +x
00:34:24.780 23:05:32 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:24.780 23:05:32 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:24.780 23:05:32 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:24.780 23:05:32 -- common/autotest_common.sh@10 -- # set +x
00:34:26.684 INFO: APP EXITING
00:34:26.684 INFO: killing all VMs
00:34:26.684 INFO: killing vhost app
00:34:26.684 INFO: EXIT DONE
00:34:28.058 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:34:28.058 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:34:28.058 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:34:28.058 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:34:28.058 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:34:28.058 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:34:28.058 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:34:28.058 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:34:28.058 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:34:28.058 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:34:28.058 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:34:28.058 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:34:28.058 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:34:28.058 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:34:28.058 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:34:28.058 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:34:28.058 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:34:29.432 Cleaning
00:34:29.432 Removing: /var/run/dpdk/spdk0/config
00:34:29.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:29.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:29.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:29.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:29.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:34:29.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:34:29.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:34:29.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:34:29.432 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:29.432 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:29.432 Removing: /var/run/dpdk/spdk1/config
00:34:29.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:34:29.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:34:29.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:34:29.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:34:29.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:34:29.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:34:29.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:34:29.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:34:29.432 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:34:29.432 Removing: /var/run/dpdk/spdk1/hugepage_info
00:34:29.432 Removing: /var/run/dpdk/spdk2/config
00:34:29.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:34:29.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:34:29.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:34:29.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:34:29.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:34:29.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:34:29.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:34:29.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:34:29.432 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:34:29.432 Removing: /var/run/dpdk/spdk2/hugepage_info
00:34:29.432 Removing: /var/run/dpdk/spdk3/config
00:34:29.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:34:29.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:34:29.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:34:29.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:34:29.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:34:29.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:34:29.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:34:29.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:34:29.432 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:34:29.432 Removing: /var/run/dpdk/spdk3/hugepage_info
00:34:29.432 Removing: /var/run/dpdk/spdk4/config
00:34:29.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:34:29.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:34:29.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:34:29.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:34:29.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:34:29.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:34:29.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:34:29.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:34:29.432 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:34:29.432 Removing: /var/run/dpdk/spdk4/hugepage_info
00:34:29.432 Removing: /dev/shm/bdev_svc_trace.1
00:34:29.432 Removing: /dev/shm/nvmf_trace.0
00:34:29.432 Removing: /dev/shm/spdk_tgt_trace.pid4132692
00:34:29.432 Removing: /var/run/dpdk/spdk0
00:34:29.432 Removing: /var/run/dpdk/spdk1
00:34:29.432 Removing: /var/run/dpdk/spdk2
00:34:29.432 Removing: /var/run/dpdk/spdk3
00:34:29.432 Removing: /var/run/dpdk/spdk4
00:34:29.432 Removing: /var/run/dpdk/spdk_pid100732
00:34:29.432 Removing: /var/run/dpdk/spdk_pid105588
00:34:29.432 Removing: /var/run/dpdk/spdk_pid108357
00:34:29.432 Removing: /var/run/dpdk/spdk_pid112257
00:34:29.432 Removing: /var/run/dpdk/spdk_pid113205
00:34:29.433 Removing: /var/run/dpdk/spdk_pid114252
00:34:29.433 Removing: /var/run/dpdk/spdk_pid115275
00:34:29.433 Removing: /var/run/dpdk/spdk_pid118034
00:34:29.433 Removing: /var/run/dpdk/spdk_pid120623
00:34:29.433 Removing: /var/run/dpdk/spdk_pid122998
00:34:29.433 Removing: /var/run/dpdk/spdk_pid127230
00:34:29.433 Removing: /var/run/dpdk/spdk_pid127236
00:34:29.433 Removing: /var/run/dpdk/spdk_pid130252
00:34:29.433 Removing: /var/run/dpdk/spdk_pid130385
00:34:29.433 Removing: /var/run/dpdk/spdk_pid130886
00:34:29.433 Removing: /var/run/dpdk/spdk_pid131295
00:34:29.433 Removing: /var/run/dpdk/spdk_pid131419
00:34:29.433 Removing: /var/run/dpdk/spdk_pid134082
00:34:29.433 Removing: /var/run/dpdk/spdk_pid134524
00:34:29.433 Removing: /var/run/dpdk/spdk_pid137192
00:34:29.433 Removing: /var/run/dpdk/spdk_pid139051
00:34:29.433 Removing: /var/run/dpdk/spdk_pid142591
00:34:29.433 Removing: /var/run/dpdk/spdk_pid145928
00:34:29.433 Removing: /var/run/dpdk/spdk_pid152427
00:34:29.433 Removing: /var/run/dpdk/spdk_pid156899
00:34:29.433 Removing: /var/run/dpdk/spdk_pid156905
00:34:29.433 Removing: /var/run/dpdk/spdk_pid170017
00:34:29.433 Removing: /var/run/dpdk/spdk_pid170542
00:34:29.433 Removing: /var/run/dpdk/spdk_pid170954
00:34:29.433 Removing: /var/run/dpdk/spdk_pid171363
00:34:29.433 Removing: /var/run/dpdk/spdk_pid171947
00:34:29.433 Removing: /var/run/dpdk/spdk_pid172474
00:34:29.692 Removing: /var/run/dpdk/spdk_pid172878
00:34:29.692 Removing: /var/run/dpdk/spdk_pid173292
00:34:29.692 Removing: /var/run/dpdk/spdk_pid175801
00:34:29.692 Removing: /var/run/dpdk/spdk_pid176013
00:34:29.692 Removing: /var/run/dpdk/spdk_pid179864
00:34:29.692 Removing: /var/run/dpdk/spdk_pid179920
00:34:29.692 Removing: /var/run/dpdk/spdk_pid183293
00:34:29.692 Removing: /var/run/dpdk/spdk_pid185906
00:34:29.692 Removing: /var/run/dpdk/spdk_pid192706
00:34:29.692 Removing: /var/run/dpdk/spdk_pid193218
00:34:29.692 Removing: /var/run/dpdk/spdk_pid195609
00:34:29.692 Removing: /var/run/dpdk/spdk_pid195883
00:34:29.692 Removing: /var/run/dpdk/spdk_pid198501
00:34:29.692 Removing: /var/run/dpdk/spdk_pid202822
00:34:29.692 Removing: /var/run/dpdk/spdk_pid204975
00:34:29.693 Removing: /var/run/dpdk/spdk_pid211238
00:34:29.693 Removing: /var/run/dpdk/spdk_pid216444
00:34:29.693 Removing: /var/run/dpdk/spdk_pid217743
00:34:29.693 Removing: /var/run/dpdk/spdk_pid218410
00:34:29.693 Removing: /var/run/dpdk/spdk_pid228590
00:34:29.693 Removing: /var/run/dpdk/spdk_pid230849
00:34:29.693 Removing: /var/run/dpdk/spdk_pid232849
00:34:29.693 Removing: /var/run/dpdk/spdk_pid238390
00:34:29.693 Removing: /var/run/dpdk/spdk_pid238517
00:34:29.693 Removing: /var/run/dpdk/spdk_pid241423
00:34:29.693 Removing: /var/run/dpdk/spdk_pid242719
00:34:29.693 Removing: /var/run/dpdk/spdk_pid244129
00:34:29.693 Removing: /var/run/dpdk/spdk_pid244974
00:34:29.693 Removing: /var/run/dpdk/spdk_pid246459
00:34:29.693 Removing: /var/run/dpdk/spdk_pid247255
00:34:29.693 Removing: /var/run/dpdk/spdk_pid252744
00:34:29.693 Removing: /var/run/dpdk/spdk_pid253046
00:34:29.693 Removing: /var/run/dpdk/spdk_pid253439
00:34:29.693 Removing: /var/run/dpdk/spdk_pid254997
00:34:29.693 Removing: /var/run/dpdk/spdk_pid255397
00:34:29.693 Removing: /var/run/dpdk/spdk_pid255671
00:34:29.693 Removing: /var/run/dpdk/spdk_pid258124
00:34:29.693 Removing: /var/run/dpdk/spdk_pid258138
00:34:29.693 Removing: /var/run/dpdk/spdk_pid259695
00:34:29.693 Removing: /var/run/dpdk/spdk_pid260087
00:34:29.693 Removing: /var/run/dpdk/spdk_pid260104
00:34:29.693 Removing: /var/run/dpdk/spdk_pid28544
00:34:29.693 Removing: /var/run/dpdk/spdk_pid31827
00:34:29.693 Removing: /var/run/dpdk/spdk_pid35656
00:34:29.693 Removing: /var/run/dpdk/spdk_pid39931
00:34:29.693 Removing: /var/run/dpdk/spdk_pid39983
00:34:29.693 Removing: /var/run/dpdk/spdk_pid40698
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4131003
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4131747
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4132692
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4133013
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4133704
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4133844
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4134565
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4134689
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4134953
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4136257
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4137077
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4137395
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4137592
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4137855
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4138124
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4138281
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4138433
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4138621
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4138937
00:34:29.693 Removing: /var/run/dpdk/spdk_pid41404
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4141395
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4141594
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4141754
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4141768
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4142188
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4142203
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4142508
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4142633
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4142806
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4142930
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4143095
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4143111
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4143601
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4143761
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4143968
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4146076
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4148720
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4155869
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4156296
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4158906
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4159188
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4162341
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4166065
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4168257
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4174683
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4179940
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4181253
00:34:29.693 Removing: /var/run/dpdk/spdk_pid4181895
00:34:29.951 Removing: /var/run/dpdk/spdk_pid4192190
00:34:29.951 Removing: /var/run/dpdk/spdk_pid42504
00:34:29.951 Removing: /var/run/dpdk/spdk_pid42866
00:34:29.951 Removing: /var/run/dpdk/spdk_pid42921
00:34:29.951 Removing: /var/run/dpdk/spdk_pid43068
00:34:29.951 Removing: /var/run/dpdk/spdk_pid43205
00:34:29.951 Removing: /var/run/dpdk/spdk_pid43207
00:34:29.951 Removing: /var/run/dpdk/spdk_pid43864
00:34:29.951 Removing: /var/run/dpdk/spdk_pid44515
00:34:29.951 Removing: /var/run/dpdk/spdk_pid45061
00:34:29.951 Removing: /var/run/dpdk/spdk_pid45461
00:34:29.951 Removing: /var/run/dpdk/spdk_pid45579
00:34:29.951 Removing: /var/run/dpdk/spdk_pid45724
00:34:29.952 Removing: /var/run/dpdk/spdk_pid46725
00:34:29.952 Removing: /var/run/dpdk/spdk_pid47465
00:34:29.952 Removing: /var/run/dpdk/spdk_pid52688
00:34:29.952 Removing: /var/run/dpdk/spdk_pid655
00:34:29.952 Removing: /var/run/dpdk/spdk_pid80806
00:34:29.952 Removing: /var/run/dpdk/spdk_pid83866
00:34:29.952 Removing: /var/run/dpdk/spdk_pid84932
00:34:29.952 Removing: /var/run/dpdk/spdk_pid86257
00:34:29.952 Removing: /var/run/dpdk/spdk_pid86399
00:34:29.952 Removing: /var/run/dpdk/spdk_pid86542
00:34:29.952 Removing: /var/run/dpdk/spdk_pid86679
00:34:29.952 Removing: /var/run/dpdk/spdk_pid87122
00:34:29.952 Removing: /var/run/dpdk/spdk_pid88449
00:34:29.952 Removing: /var/run/dpdk/spdk_pid89298
00:34:29.952 Removing: /var/run/dpdk/spdk_pid89781
00:34:29.952 Removing: /var/run/dpdk/spdk_pid91862
00:34:29.952 Removing: /var/run/dpdk/spdk_pid92266
00:34:29.952 Removing: /var/run/dpdk/spdk_pid92832
00:34:29.952 Removing: /var/run/dpdk/spdk_pid95222
00:34:29.952 Removing: /var/run/dpdk/spdk_pid98507
00:34:29.952 Removing: /var/run/dpdk/spdk_pid98508
00:34:29.952 Removing: /var/run/dpdk/spdk_pid98509
00:34:29.952 Clean
00:34:29.952 23:05:37 -- common/autotest_common.sh@1453 -- # return 0
00:34:29.952 23:05:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:29.952 23:05:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:29.952 23:05:37 -- common/autotest_common.sh@10 -- # set +x
00:34:29.952 23:05:37 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:29.952 23:05:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:29.952 23:05:37 -- common/autotest_common.sh@10 -- # set +x
00:34:29.952 23:05:37 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:29.952 23:05:37 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:34:29.952 23:05:37 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:34:29.952 23:05:37 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:29.952 23:05:37 -- spdk/autotest.sh@398 -- # hostname
00:34:29.952 23:05:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:34:30.209 geninfo: WARNING: invalid characters removed from testname!
00:35:02.292 23:06:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:05.588 23:06:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:08.895 23:06:16 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:12.188 23:06:19 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:14.730 23:06:22 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:18.026 23:06:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:21.324 23:06:28 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:21.324 23:06:28 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:21.324 23:06:28 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:35:21.324 23:06:28 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:21.324 23:06:28 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:21.324 23:06:28 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:21.324 + [[ -n 4059037 ]]
00:35:21.324 + sudo kill 4059037
00:35:21.335 [Pipeline] }
00:35:21.351 [Pipeline] // stage
00:35:21.356 [Pipeline] }
00:35:21.370 [Pipeline] // timeout
00:35:21.376 [Pipeline] }
00:35:21.390 [Pipeline] // catchError
00:35:21.395 [Pipeline] }
00:35:21.410 [Pipeline] // wrap
00:35:21.416 [Pipeline] }
00:35:21.429 [Pipeline] // catchError
00:35:21.439 [Pipeline] stage
00:35:21.441 [Pipeline] { (Epilogue)
00:35:21.454 [Pipeline] catchError
00:35:21.456 [Pipeline] {
00:35:21.469 [Pipeline] echo
00:35:21.470 Cleanup processes
00:35:21.476 [Pipeline] sh
00:35:21.812 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:21.812 271405 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:21.826 [Pipeline] sh
00:35:22.112 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:22.112 ++ awk '{print $1}'
00:35:22.112 ++ grep -v 'sudo pgrep'
00:35:22.112 + sudo kill -9
00:35:22.112 + true
00:35:22.124 [Pipeline] sh
00:35:22.408 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:32.389 [Pipeline] sh
00:35:32.676 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:32.676 Artifacts sizes are good
00:35:32.691 [Pipeline] archiveArtifacts
00:35:32.699 Archiving artifacts
00:35:32.843 [Pipeline] sh
00:35:33.128 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:35:33.143 [Pipeline] cleanWs
00:35:33.153 [WS-CLEANUP] Deleting project workspace...
00:35:33.153 [WS-CLEANUP] Deferred wipeout is used...
00:35:33.161 [WS-CLEANUP] done
00:35:33.163 [Pipeline] }
00:35:33.180 [Pipeline] // catchError
00:35:33.193 [Pipeline] sh
00:35:33.475 + logger -p user.info -t JENKINS-CI
00:35:33.484 [Pipeline] }
00:35:33.497 [Pipeline] // stage
00:35:33.502 [Pipeline] }
00:35:33.529 [Pipeline] // node
00:35:33.534 [Pipeline] End of Pipeline
00:35:33.564 Finished: SUCCESS